2025-04-02 03:19:01,085 [ 587020 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-04-02 03:19:01,086 [ 587020 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:97, check_args_and_update_paths)
2025-04-02 03:19:01,086 [ 587020 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:108, check_args_and_update_paths)
2025-04-02 03:19:01,086 [ 587020 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:110, check_args_and_update_paths)
clickhouse_integration_tests_volume

Running pytest container as:

'docker run --rm --name clickhouse_integration_tests_4wrvlh --privileged --dns-search='.' --memory=30709018624 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=8b2301119731 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_postgresql_database_engine/test.py::test_datetime test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl test_postgresql_database_engine/test.py::test_postgres_database_old_syntax test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl test_postgresql_database_engine/test.py::test_postgresql_database_with_schema test_postgresql_database_engine/test.py::test_postgresql_fetch_tables test_postgresql_database_engine/test.py::test_postgresql_password_leak test_postgresql_database_engine/test.py::test_predefined_connection_configuration test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order test_prometheus_endpoint/test.py::test_prometheus_endpoint test_prometheus_protocols/test.py::test_64bit_id test_prometheus_protocols/test.py::test_create_as_table test_prometheus_protocols/test.py::test_custom_id_algorithm test_prometheus_protocols/test.py::test_default test_prometheus_protocols/test.py::test_external_tables test_prometheus_protocols/test.py::test_inner_engines test_prometheus_protocols/test.py::test_read_auth test_prometheus_protocols/test.py::test_remote_write_v1_status_code test_prometheus_protocols/test.py::test_tags_to_columns test_range_hashed_dictionary_types/test.py::test_range_hashed_dict test_read_only_table/test.py::test_restart_zookeeper test_recompression_ttl/test.py::test_recompression_multiple_ttls test_recompression_ttl/test.py::test_recompression_replicated test_recompression_ttl/test.py::test_recompression_simple test_recovery_time_metric/test.py::test_recovery_time_metric test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db test_refreshable_mv/test.py::test_refreshable_mv_in_system_db test_relative_filepath/test.py::test_filepath test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers test_reload_certificate/test.py::test_ECcert_reload test_reload_certificate/test.py::test_cert_with_pass_phrase test_reload_certificate/test.py::test_chain_reload test_reload_certificate/test.py::test_first_than_second_cert test_reload_clusters_config/test.py::test_add_cluster test_reload_clusters_config/test.py::test_delete_cluster test_reload_clusters_config/test.py::test_simple_reload test_reload_clusters_config/test.py::test_update_one_cluster test_reloading_settings_from_users_xml/test.py::test_force_reload test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout 'test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain]' test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4]' test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format test_render_log_file_name_templates/test.py::test_check_file_names test_replica_can_become_leader/test.py::test_can_become_leader test_replica_is_active/test.py::test_replica_is_active test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped test_replicating_constants/test.py::test_different_versions test_replication_credentials/test.py::test_credentials_and_no_credentials test_replication_credentials/test.py::test_different_credentials test_replication_credentials/test.py::test_no_credentials test_replication_credentials/test.py::test_same_credentials test_replication_without_zookeeper/test.py::test_startup_without_zookeeper test_restart_server/test.py::test_drop_memory_database test_restart_server/test.py::test_flushes_async_insert_queue test_restore_replica/test.py::test_restore_replica_alive_replicas test_restore_replica/test.py::test_restore_replica_invalid_tables test_restore_replica/test.py::test_restore_replica_parallel test_restore_replica/test.py::test_restore_replica_sequential test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop test_rocksdb_read_only/test.py::test_read_only test_role/test.py::test_admin_option test_role/test.py::test_changing_default_roles_affects_new_sessions_only test_role/test.py::test_combine_privileges test_role/test.py::test_create_role test_role/test.py::test_function_current_roles test_role/test.py::test_grant_role_to_role test_role/test.py::test_introspection test_role/test.py::test_revoke_requires_admin_option 'test_role/test.py::test_role_expiration[False]' 'test_role/test.py::test_role_expiration[True]' test_role/test.py::test_roles_cache test_role/test.py::test_set_role test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 'test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header]' 'test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header]' 'test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header]' test_s3_cluster/test.py::test_ambiguous_join test_s3_cluster/test.py::test_cluster_default_expression test_s3_cluster/test.py::test_cluster_format_detection test_s3_cluster/test.py::test_cluster_with_header test_s3_cluster/test.py::test_cluster_with_named_collection test_s3_cluster/test.py::test_count test_s3_cluster/test.py::test_count_macro test_s3_cluster/test.py::test_distributed_insert_select_with_replicated -vvv -ss" altinityinfra/integration-tests-runner:2165613c5fcd '.

Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
Test order randomisation NOT enabled. Enable with --random-order or --random-order-bucket=
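Note: the whole run is driven by that single docker run invocation; the test selection and xdist flags travel inside PYTEST_ADDOPTS (--dist=loadfile keeps all tests from one file on the same worker, which is why each test_*/test.py directory below is handled by exactly one gwN worker). A minimal sketch of how such an invocation could be assembled; the function name and structure are illustrative, not the actual runner code:

```python
import subprocess

def run_tests_in_container(image, name, memory, volumes, env, pytest_addopts):
    """Assemble a `docker run` command like the one logged above (illustrative)."""
    cmd = [
        "docker", "run", "--rm", "--name", name, "--privileged",
        "--dns-search=.", f"--memory={memory}",
        "--security-opt", "seccomp=unconfined", "--cap-add=SYS_PTRACE",
    ]
    for host_path, container_path in volumes:
        cmd.append(f"--volume={host_path}:{container_path}")
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]
    # PYTEST_ADDOPTS is read by pytest inside the container; it carries the
    # worker count (-n 10), the scheduling mode and the full test list.
    cmd += ["-e", f"PYTEST_ADDOPTS={pytest_addopts}", image]
    return subprocess.run(cmd, check=True)
```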
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]

scheduling tests via LoadFileScheduling

[first test picked up by each of the ten workers:]
test_role/test.py::test_admin_option
test_postgresql_database_engine/test.py::test_datetime
test_prometheus_protocols/test.py::test_64bit_id
test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3]
test_replication_credentials/test.py::test_credentials_and_no_credentials
test_reload_certificate/test.py::test_ECcert_reload
test_s3_cluster/test.py::test_ambiguous_join
test_reload_clusters_config/test.py::test_add_cluster
test_restore_replica/test.py::test_restore_replica_alive_replicas
test_reloading_settings_from_users_xml/test.py::test_force_reload

[each worker then performs the same Docker housekeeping:]
Command:[docker ps | wc -l]
Stdout:1
No running containers
Pruning Docker networks
Command:[docker network prune --force]
Stderr:Error response from daemon: a prune operation is already running
Exitcode:1
Command:[sysctl net.ipv4.ip_local_port_range='55000 65535']
Stdout:net.ipv4.ip_local_port_range = 55000 65535

Running tests in /ClickHouse/tests/integration/test_postgresql_database_engine/test.py
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_role/test.py
Cluster start called. is_up=False
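Note: the "a prune operation is already running" errors are expected noise. All ten workers call docker network prune --force at startup, the Docker daemon serialises prune operations, and the losers just log the error and carry on. A tolerant wrapper might look like this (hypothetical helper, not the actual harness code):

```python
import subprocess

def prune_networks_tolerant() -> None:
    """Prune unused Docker networks, ignoring the race with other workers."""
    result = subprocess.run(
        ["docker", "network", "prune", "--force"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Another xdist worker won the race; the daemon only allows one
        # concurrent prune, so this failure is safe to ignore.
        print(f"Stderr:{result.stderr.strip()} Exitcode:{result.returncode}")
```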
[environment dump from worker gw3 (test_remote_blobs_naming):]
ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7
ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse
ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1
ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true
ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1
ENV HOSTNAME 2360da140b68
ENV SHLVL 0
ENV HOME /root
ENV OLDPWD /
ENV DOCKER_HELPER_TAG 5dc43a6382f0
ENV PYTHONUNBUFFERED 1
ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e
ENV UBSAN_OPTIONS print_stacktrace=1
ENV PYTEST_ADDOPTS --dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 [same test list as in the docker run command above] -vvv -ss
ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge
ENV COMPOSE_HTTP_TIMEOUT 600
ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6
ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d
ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse
ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1
ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV DOCKER_KERBERIZED_HADOOP_TAG latest
ENV DOCKER_CHANNEL stable
ENV DOCKER_CLIENT_TIMEOUT 300
ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6
ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519
ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e
ENV PWD /ClickHouse/tests/integration
ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4
ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge
ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config
ENV TZ Etc/UTC
ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java
ENV DOCKER_BASE_TAG 8b2301119731
ENV SPARK_HOME /spark-3.3.2-bin-hadoop3
ENV LC_CTYPE C.UTF-8
ENV INTEGRATION_TESTS_RUN_ID 0
ENV WORKER_FREE_PORTS 30150 30151 30152 30153 30154 30155 30156 30157 30158 30159 30160 30161 30162 30163 30164 30165 30166 30167 30168 30169 30170 30171 30172 30173 30174 30175 30176 30177 30178 30179 30180 30181 30182 30183 30184 30185 30186 30187 30188 30189 30190 30191 30192 30193 30194 30195 30196 30197 30198 30199
ENV PYTEST_XDIST_TESTRUNUID 3b02e9a4cb3e459b869a34972bebb8ec
ENV PYTEST_XDIST_WORKER gw3
ENV PYTEST_XDIST_WORKER_COUNT 10
ENV PYTEST_CURRENT_TEST test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3] (setup)

[interleaved startup output from the other workers:]
Running tests in /ClickHouse/tests/integration/test_reload_clusters_config/test.py
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_reload_certificate/test.py
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_replication_credentials/test.py
Cluster start called. is_up=False
Command:[sysctl net.ipv4.ip_local_port_range='55000 65535']
Stdout:net.ipv4.ip_local_port_range = 55000 65535
Stderr:Error response from daemon: a prune operation is already running
Exitcode:1

[worker gw3 continues:]
CLUSTER INIT base_config_dir:/clickhouse-config
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Setup Keeper
Cluster name:backward_compatibility project_name:roottestremoteblobsnamingbackwardcompatibility-gw3.
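Note: WORKER_FREE_PORTS gives each xdist worker a disjoint slice of host ports so clusters started in parallel cannot collide. The two dumps in this log (gw3 above: 30150-30199; gw5 further down: 30250-30299) are consistent with a simple "50 ports per worker index" scheme; a sketch of that assumption only, since the real allocator may differ:

```python
def worker_free_ports(worker_id: str, base: int = 30000, per_worker: int = 50) -> list[int]:
    """Derive a worker's port slice from its xdist id, e.g. 'gw3' -> 30150..30199."""
    index = int(worker_id.removeprefix("gw"))
    start = base + index * per_worker
    return list(range(start, start + per_worker))

assert worker_free_ports("gw3")[0] == 30150   # matches ENV WORKER_FREE_PORTS for gw3
assert worker_free_ports("gw5")[-1] == 30299  # matches ENV WORKER_FREE_PORTS for gw5
```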
Added instance name:node tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env', '--project-name', 'roottestremoteblobsnamingbackwardcompatibility-gw3', '--file', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Cluster name:backward_compatibility project_name:roottestremoteblobsnamingbackwardcompatibility-gw3.
Added instance name:new_node tag:8b2301119731 base_cmd:[same as node above, plus '--file', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Cluster name:backward_compatibility project_name:roottestremoteblobsnamingbackwardcompatibility-gw3.
Added instance name:switching_node tag:8b2301119731 base_cmd:[same as new_node above, plus '--file', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Starting cluster...

[environment dump from worker gw5 (test_s3_cluster); identical to gw3's dump above except for:]
ENV WORKER_FREE_PORTS 30250 30251 30252 30253 30254 30255 30256 30257 30258 30259 30260 30261 30262 30263 30264 30265 30266 30267 30268 30269 30270 30271 30272 30273 30274 30275 30276 30277 30278 30279 30280 30281 30282 30283 30284 30285 30286 30287 30288 30289 30290 30291 30292 30293 30294 30295 30296 30297 30298 30299
ENV PYTEST_XDIST_WORKER gw5
ENV PYTEST_CURRENT_TEST test_s3_cluster/test.py::test_ambiguous_join (setup)

Running tests in /ClickHouse/tests/integration/test_remote_blobs_naming/test_backward_compatibility.py
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/test.py
Cluster start called. is_up=False

CLUSTER INIT base_config_dir:/clickhouse-config
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Setup Keeper
Cluster name: project_name:roottests3cluster-gw5.
Added instance name:s0_0_0 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env', '--project-name', 'roottests3cluster-gw5', '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Cluster name: project_name:roottests3cluster-gw5.
Added instance name:s0_0_1 tag:8b2301119731 base_cmd:[same as s0_0_0 above, plus '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Cluster name: project_name:roottests3cluster-gw5.
Added instance name:s0_1_0 tag:8b2301119731 base_cmd:[same as s0_0_1 above, plus '--file', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Starting cluster...
Running tests in /ClickHouse/tests/integration/test_s3_cluster/test.py
Cluster start called. is_up=False
Running tests in /ClickHouse/tests/integration/test_restore_replica/test.py
Cluster start called. is_up=False
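Note: the base_cmd lists above grow by two elements ('--file', path) for every instance added to a cluster: the first instance's docker-compose.yml comes first, then the shared keeper and minio compose files, then each further instance's file in the order it was added. A sketch of that accumulation, assumed from the logged lists (the real helper under tests/integration/helpers may differ):

```python
def compose_base_cmd(env_file: str, project: str,
                     instance_ymls: list[str], shared_ymls: list[str]) -> list[str]:
    """Build a docker compose command covering every instance added so far."""
    cmd = ["docker", "compose", "--env-file", env_file, "--project-name", project]
    # Observed ordering: first instance, shared keeper/minio files, later instances.
    for yml in instance_ymls[:1] + shared_ymls + instance_ymls[1:]:
        cmd += ["--file", yml]
    return cmd
```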
[each worker now runs the same pre-start cleanup for its own compose project; all listings come back empty (header rows only). Shown here for roottestpostgresqldatabaseengine-gw0:]
Docker networks for project roottestpostgresqldatabaseengine-gw0 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestpostgresqldatabaseengine-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestpostgresqldatabaseengine-gw0 are DRIVER VOLUME NAME
Cleanup called
Command:[docker container list --all --filter name='^/roottestpostgresqldatabaseengine-gw0-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestpostgresqldatabaseengine-gw0
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
[the same cleanup sequence is logged for roottestrole-gw1, roottestreplicationcredentials-gw9, roottests3cluster-gw5, roottestreloadingsettingsfromusersxml-gw4, roottestreloadclustersconfig-gw7, roottestremoteblobsnamingbackwardcompatibility-gw3, roottestreloadcertificate-gw6 and roottestrestorereplica-gw8; several workers again hit the prune race:]
Stderr:Error response from daemon: a prune operation is already running
Exitcode:1
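Note: the cleanup phase identifies leftovers by container name, relying on the <project>-<service>-1 naming that docker compose uses; an empty result ("Unstopped containers: {}") means the previous run shut down cleanly. A hypothetical equivalent of the logged command:

```python
import subprocess

def leftover_containers(project: str) -> dict[str, str]:
    """Map container id -> name for anything left over from a compose project."""
    out = subprocess.run(
        ["docker", "container", "list", "--all",
         "--filter", f"name=^/{project}-.*-1$",
         "--format", "{{.ID}}:{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.split(":", 1) for line in out.splitlines())
```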
Stdout:1
Volumes pruned: 1
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files ['/ClickHouse/tests/integration/test_postgresql_database_engine/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/database
Setup logs dir /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'POSTGRES_PORT': '5432', 'POSTGRES_DIR': '/ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/postgres/postgres1', 'POSTGRES_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
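Note: the "Env {...} stored in ..." line is the bridge between the Python helper and docker compose: the per-cluster environment (sanitizer options, the watchdog switch, and here the PostgreSQL port and directories) is flattened into the .env file that the later docker compose --env-file ... pull commands consume. Roughly like this (illustrative sketch only):

```python
def write_env_file(env: dict[str, str], path: str) -> None:
    """Persist the instance environment as KEY=value lines for --env-file."""
    with open(path, "w") as f:
        for key, value in env.items():
            f.write(f"{key}={value}\n")
```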
Command:[docker volume ls | wc -l] Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/configs/config.d Setup database dir /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/database Setup logs dir /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_role/_instances-0-gw1/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Stdout:1 Volumes pruned: 1 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file http://localhost:None "GET /version HTTP/1.1" 200 826 Copy custom test config files ['/ClickHouse/tests/integration/test_reload_certificate/configs/first.crt', '/ClickHouse/tests/integration/test_reload_certificate/configs/first.key', '/ClickHouse/tests/integration/test_reload_certificate/configs/second.crt', '/ClickHouse/tests/integration/test_reload_certificate/configs/second.key', '/ClickHouse/tests/integration/test_reload_certificate/configs/ECcert.crt', '/ClickHouse/tests/integration/test_reload_certificate/configs/ECcert.key', '/ClickHouse/tests/integration/test_reload_certificate/configs/WithChain.crt', '/ClickHouse/tests/integration/test_reload_certificate/configs/WithChain.key', '/ClickHouse/tests/integration/test_reload_certificate/configs/WithPassPhrase.crt', '/ClickHouse/tests/integration/test_reload_certificate/configs/WithPassPhrase.key', '/ClickHouse/tests/integration/test_reload_certificate/configs/cert.xml'] to /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/configs/config.d Command:[docker compose --env-file /ClickHouse/tests/integration/test_role/_instances-0-gw1/.env --project-name roottestrole-gw1 --file /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/docker-compose.yml pull] Stdout:1 Volumes pruned: 1 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers http://localhost:None "GET /version HTTP/1.1" 200 826 Stdout:1 Generate and write macros file Volumes pruned: 1 Setup directory for instance: node Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/credentials1.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node1/configs/config.d Create directory for configuration generated in this helper Command:[docker compose --env-file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env --project-name roottestpostgresqldatabaseengine-gw0 --file 
/ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml pull] Create directory for common tests configuration Stdout:1 Copy common configuration from helpers Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node1/database Setup logs dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Generate and write macros file Setup directory for instance: node2 Copy custom test config files [] to /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/configs/config.d Create directory for configuration generated in this helper Create directory for common tests configuration Setup database dir /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/database Setup database dir /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/database Copy common configuration from helpers Setup logs dir /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/logs Setup logs dir /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Generate and write macros file Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/.env Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/.env Stdout:1 Volumes pruned: 1 Setup directory for instance: replica1 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/credentials1.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node2/configs/config.d Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node2/database No config file found Setup logs dir 
/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node3 Create directory for configuration generated in this helper Create directory for common tests configuration Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Copy common configuration from helpers Generate and write macros file Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/no_credentials.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node3/configs/config.d Copy custom test config files ['/ClickHouse/tests/integration/test_restore_replica/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node3/database Setup logs dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node3/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node4 Setup database dir /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/database Setup logs dir /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/logs Create directory for configuration generated in this helper Create directory for common tests configuration Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Copy common configuration from helpers Setup directory for instance: replica2 Generate and write macros file Create directory for configuration generated in this helper Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/no_credentials.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node4/configs/config.d Create directory for common tests configuration Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node4/database Setup logs dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node4/logs Copy common configuration from helpers Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Stdout:1 Setup directory for instance: node5 Volumes pruned: 1 Setup directory for instance: node Create directory for configuration generated in this helper Generate and write macros file Create directory for common tests configuration Copy common configuration from helpers Create 
directory for configuration generated in this helper Create directory for common tests configuration Copy custom test config files ['/ClickHouse/tests/integration/test_restore_replica/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/configs/config.d Generate and write macros file Copy common configuration from helpers Setup database dir /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/database Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/credentials1.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node5/configs/config.d Setup logs dir /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node5/database Generate and write macros file Setup logs dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node5/logs Setup directory for instance: replica3 Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node6 Create directory for configuration generated in this helper Copy custom test config files ['/ClickHouse/tests/integration/test_remote_blobs_naming/configs/old_node.xml', '/ClickHouse/tests/integration/test_remote_blobs_naming/configs/storage_conf.xml'] to /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/configs/config.d Create directory for configuration generated in this helper Create directory for common tests configuration Create directory for common tests configuration Copy common configuration from helpers Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/credentials2.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node6/configs/config.d Generate and write macros file Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node6/database Setup logs dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node6/logs Setup database dir /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/database Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Copy custom test config files ['/ClickHouse/tests/integration/test_restore_replica/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/configs/config.d Setup logs dir /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/logs 
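The same per-instance sequence repeats for every node being brought up here: create the directory for generated configuration and the common-tests configuration directory, copy the common configuration from helpers, generate and write the macros file, copy the test's custom config files into configs/config.d, set up the database and logs directories, and record the entrypoint command plus the sanitizer environment in the worker's .env file. A minimal sketch of that sequence, assuming hypothetical paths and a simplified macros format; this is not the actual helpers/cluster.py implementation:

import json
import shutil
from pathlib import Path

# The entrypoint command is copied from the log; everything else in this
# sketch (function name, macros file format, layout details) is an assumption.
ENTRYPOINT_CMD = [
    "clickhouse", "server",
    "--config-file=/etc/clickhouse-server/config.xml",
    "--log-file=/var/log/clickhouse-server/clickhouse-server.log",
    "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log",
    "--",
]

def setup_instance_dir(instance_root, custom_configs, macros):
    root = Path(instance_root)
    config_d = root / "configs" / "config.d"
    config_d.mkdir(parents=True, exist_ok=True)   # generated + common config dirs
    (root / "database").mkdir(exist_ok=True)      # "Setup database dir"
    (root / "logs").mkdir(exist_ok=True)          # "Setup logs dir"
    # "Generate and write macros file"
    macros_xml = "".join(f"<{k}>{v}</{k}>" for k, v in macros.items())
    (config_d / "macros.xml").write_text(
        f"<clickhouse><macros>{macros_xml}</macros></clickhouse>"
    )
    for cfg in custom_configs:                    # "Copy custom test config files [...]"
        shutil.copy(cfg, config_d / Path(cfg).name)
    print("Entrypoint cmd:", json.dumps(ENTRYPOINT_CMD))

setup_instance_dir("/tmp/_instances-demo/node1", [], {"shard": "1", "replica": "node1"})

Once every instance directory exists, the worker assembles a single docker compose command that stacks one --file argument per node plus any shared compose files (keeper, minio, postgres), which is exactly the shape of the pull commands interleaved above.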
Setup directory for instance: node7 Setup database dir /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/database Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup logs dir /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/logs Setup directory for instance: new_node Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Create directory for configuration generated in this helper Create directory for common tests configuration Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env http://localhost:None "GET /version HTTP/1.1" 200 826 Create directory for configuration generated in this helper Copy common configuration from helpers Create directory for common tests configuration http://localhost:None "GET /version HTTP/1.1" 200 826 Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Copy common configuration from helpers Generate and write macros file No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/.env --project-name roottestreloadcertificate-gw6 --file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/docker-compose.yml pull] No config file found Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/credentials1.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node7/configs/config.d Generate and write macros file Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node7/database Setup logs dir 
/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node7/logs Command:[docker compose --env-file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/.env --project-name roottestreloadingsettingsfromusersxml-gw4 --file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/docker-compose.yml pull] Copy custom test config files ['/ClickHouse/tests/integration/test_remote_blobs_naming/configs/new_node.xml', '/ClickHouse/tests/integration/test_remote_blobs_naming/configs/storage_conf_new.xml'] to /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/configs/config.d Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node8 Stdout:1 Volumes pruned: 1 Create directory for configuration generated in this helper Create directory for common tests configuration Setup directory for instance: node Copy common configuration from helpers Generate and write macros file Setup database dir /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/database Setup logs dir /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/logs Create directory for configuration generated in this helper Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Create directory for common tests configuration Copy custom test config files ['/ClickHouse/tests/integration/test_replication_credentials/configs/remote_servers.xml', '/ClickHouse/tests/integration/test_replication_credentials/configs/no_credentials.xml'] to /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node8/configs/config.d Setup directory for instance: switching_node Copy common configuration from helpers Setup database dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node8/database Setup logs dir /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node8/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Create directory for configuration generated in this helper Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper1/coordination', 'keeper_logs_dir2': 
'/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/.env Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Copy custom test config files ['/ClickHouse/tests/integration/test_reload_clusters_config/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/configs/config.d Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_remote_blobs_naming/configs/switching_node.xml', '/ClickHouse/tests/integration/test_remote_blobs_naming/configs/storage_conf.xml'] to /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/database Setup logs dir /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper3/coordination'} stored in 
/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env Setup database dir /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/database Setup logs dir /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper3/coordination', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env --project-name roottestrestorereplica-gw8 --file 
/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/docker-compose.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/docker-compose.yml pull] Volumes pruned: 1 Setup directory for instance: s0_0_0 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file http://localhost:None "GET /version HTTP/1.1" 200 826 Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/configs/config.d http://localhost:None "GET /version HTTP/1.1" 200 826 http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull] Command:[docker compose --env-file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/.env --project-name roottestreplicationcredentials-gw9 --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node3/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node4/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node5/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node6/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node7/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node8/docker-compose.yml pull] Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/database Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Command:[docker compose --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file 
/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/docker-compose.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/docker-compose.yml pull] Setup directory for instance: s0_0_1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/database Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: s0_1_0 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_s3_cluster/configs/cluster.xml', '/ClickHouse/tests/integration/test_s3_cluster/configs/named_collections.xml'] to /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/configs/config.d Setup database dir /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/database Setup logs dir /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper3/coordination', 'MINIO_CERTS_DIR': 
'/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --project-name roottests3cluster-gw5 --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/docker-compose.yml pull] Stdout:Deleted Networks: Stdout:roottestmysqlprotocol-gw0_default Stdout: Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] Stdout:net.ipv4.ip_local_port_range = 55000 65535 Running tests in /ClickHouse/tests/integration/test_prometheus_protocols/test.py Cluster start called. is_up=False Docker networks for project roottestprometheusprotocols-gw2 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestprometheusprotocols-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprometheusprotocols-gw2 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestprometheusprotocols-gw2 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestprometheusprotocols-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprometheusprotocols-gw2 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestprometheusprotocols-gw2-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestprometheusprotocols-gw2 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_prometheus_protocols/configs/prometheus.xml'] to /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/database Setup logs dir /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'PROMETHEUS_WRITER_HOST': 'prometheus_writer', 'PROMETHEUS_WRITER_PORT': '9090', 'PROMETHEUS_WRITER_LOGS': '/ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/prometheus_writer/logs', 'PROMETHEUS_WRITER_LOGS_FS': 'bind', 'PROMETHEUS_READER_HOST': 'prometheus_reader', 'PROMETHEUS_READER_PORT': '9091', 'PROMETHEUS_READER_LOGS': '/ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/prometheus_reader/logs', 'PROMETHEUS_READER_LOGS_FS': 'bind', 'PROMETHEUS_REMOTE_WRITE_HANDLER': 'http://node:9092/write', 'PROMETHEUS_REMOTE_READ_HANDLER': 'http://node:9092/read'} stored in /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env --project-name roottestprometheusprotocols-gw2 --file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_prometheus.yml pull] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/.env --project-name roottestreloadcertificate-gw6 --file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/.env --project-name roottestreloadcertificate-gw6 --file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/docker-compose.yml up -d --no-recreate] Stderr: replica2 Skipped - Image is already being pulled by zoo2 Stderr: replica3 Skipped - Image is already being pulled by zoo2 Stderr: replica1 Skipped - Image is already being pulled by zoo2 Stderr: zoo3 Skipped - Image is already being pulled by zoo2 Stderr: zoo1 Skipped - Image is already being pulled by zoo2 Stderr: zoo2 Pulling Stderr: zoo2 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: 
['/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper1/log', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper1/config', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper1/coordination', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper2/log', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper2/config', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper2/coordination', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper3/log', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper3/config', '/ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/keeper3/coordination'] Command:[docker compose --project-name roottestrestorereplica-gw8 --env-file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: instance Pulling Stderr: instance Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_role/_instances-0-gw1/.env --project-name roottestrole-gw1 --file /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_role/_instances-0-gw1/.env --project-name roottestrole-gw1 --file /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/docker-compose.yml up -d --no-recreate] Stderr: node4 Skipped - Image is already being pulled by node5 Stderr: zoo3 Skipped - Image is already being pulled by node5 Stderr: node8 Skipped - Image is already being pulled by node5 Stderr: node7 Skipped - Image is already being pulled by node5 Stderr: node1 Skipped - Image is already being pulled by node5 Stderr: node2 Skipped - Image is already being pulled by node5 Stderr: zoo1 Skipped - Image is already being pulled by node5 Stderr: node6 Skipped - Image is already being pulled by node5 Stderr: node3 Skipped - Image is already being pulled by node5 Stderr: zoo2 Skipped - Image is already being pulled by node5 Stderr: node5 Pulling Stderr: node5 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper1/log', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper1/config', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper1/coordination', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper2/log', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper2/config', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper2/coordination', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper3/log', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper3/config', '/ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/keeper3/coordination'] Command:[docker compose --project-name roottestreplicationcredentials-gw9 --env-file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: zoo2 Skipped - Image is already being 
pulled by zoo1 Stderr: zoo3 Skipped - Image is already being pulled by zoo1 Stderr: node Skipped - Image is already being pulled by zoo1 Stderr: zoo1 Pulling Stderr: zoo1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper1/log', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper1/config', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper1/coordination', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper2/log', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper2/config', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper2/coordination', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper3/log', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper3/config', '/ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/keeper3/coordination'] Command:[docker compose --project-name roottestreloadclustersconfig-gw7 --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: proxy2 Skipped - Image is already being pulled by proxy1 Stderr: s0_0_0 Skipped - Image is already being pulled by s0_0_1 Stderr: zoo2 Skipped - Image is already being pulled by s0_0_1 Stderr: s0_1_0 Skipped - Image is already being pulled by s0_0_1 Stderr: zoo3 Skipped - Image is already being pulled by s0_0_1 Stderr: zoo1 Skipped - Image is already being pulled by s0_0_1 Stderr: resolver Pulling Stderr: s0_0_1 Pulling Stderr: minio1 Pulling Stderr: proxy1 Pulling Stderr: proxy1 Pulled Stderr: minio1 Pulled Stderr: s0_0_1 Pulled Stderr: resolver Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper1/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper1/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper1/coordination', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper2/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper2/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper2/coordination', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper3/log', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper3/config', '/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/keeper3/coordination'] Command:[docker compose --project-name roottests3cluster-gw5 --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: node1 Pulling Stderr: postgres1 Pulling Stderr: node1 Pulled Stderr: postgres1 Pulled Setup Postgres Command:[docker compose --project-name roottestpostgresqldatabaseengine-gw0 --env-file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml --verbose up -d] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file 
/ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/.env --project-name roottestreloadingsettingsfromusersxml-gw4 --file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/.env --project-name roottestreloadingsettingsfromusersxml-gw4 --file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/docker-compose.yml up -d --no-recreate] Stderr: Network roottestreloadcertificate-gw6_default Creating Stderr: Network roottestreloadcertificate-gw6_default Created Stderr: Container roottestreloadcertificate-gw6-node-1 Creating Stderr: Container roottestreloadcertificate-gw6-node-1 Created Stderr: Container roottestreloadcertificate-gw6-node-1 Starting Stderr: Container roottestreloadcertificate-gw6-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestreloadcertificate-gw6-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestreloadcertificate-gw6-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.1.2... http://localhost:None "GET /v1.46/containers/roottestreloadcertificate-gw6-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None Stderr: proxy2 Skipped - Image is already being pulled by proxy1 Stderr: zoo2 Skipped - Image is already being pulled by new_node Stderr: zoo3 Skipped - Image is already being pulled by new_node Stderr: zoo1 Skipped - Image is already being pulled by new_node Stderr: switching_node Skipped - Image is already being pulled by new_node Stderr: node Skipped - Image is already being pulled by new_node Stderr: resolver Pulling Stderr: proxy1 Pulling Stderr: new_node Pulling Stderr: minio1 Pulling Stderr: new_node Pulled Stderr: proxy1 Pulled Stderr: minio1 Pulled Stderr: resolver Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper1/log', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper1/config', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper1/coordination', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper2/log', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper2/config', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper2/coordination', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper3/log', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper3/config', '/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/keeper3/coordination'] Command:[docker compose --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --file 
/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None Stderr:time="2025-04-02T03:19:18Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestrestorereplica-gw8_default Creating Stderr: Network roottestrestorereplica-gw8_default Created Stderr: Container roottestrestorereplica-gw8-zoo3-1 Creating Stderr: Container roottestrestorereplica-gw8-zoo1-1 Creating Stderr: Container roottestrestorereplica-gw8-zoo2-1 Creating Stderr: Container roottestrestorereplica-gw8-zoo1-1 Created Stderr: Container roottestrestorereplica-gw8-zoo3-1 Created Stderr: Container roottestrestorereplica-gw8-zoo2-1 Created Stderr: Container roottestrestorereplica-gw8-zoo2-1 Starting Stderr: Container roottestrestorereplica-gw8-zoo3-1 Starting Stderr: Container roottestrestorereplica-gw8-zoo1-1 Starting Stderr: Container roottestrestorereplica-gw8-zoo2-1 Started Stderr: Container roottestrestorereplica-gw8-zoo1-1 Started Stderr: Container roottestrestorereplica-gw8-zoo3-1 Started Stderr:time="2025-04-02T03:19:19Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:19:19Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None Stderr: Network roottestrole-gw1_default Creating Stderr: Network roottestrole-gw1_default Created Stderr: Container roottestrole-gw1-instance-1 Creating Stderr: Container roottestrole-gw1-instance-1 Created Stderr: Container roottestrole-gw1-instance-1 Starting Stderr: Container roottestrole-gw1-instance-1 Started ClickHouse instance created get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestrole-gw1-instance-1/json HTTP/1.1" 200 None get_instance_ip instance_name=instance http://localhost:None "GET /v1.46/containers/roottestrole-gw1-instance-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in instance, ip: 172.16.3.2... 
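The bursts of "Connecting to 172.16.x.x:2181 ... Connection dropped: socket connection error: Connection refused" above are expected noise rather than failures: the zoo1/zoo2/zoo3 containers have started, but Keeper inside them is not listening yet, so the kazoo client is simply retried until it connects. The repeated GET /v1.46/containers/.../json requests are the same idea on the Docker API side, polling container state while waiting for ClickHouse to start. A minimal sketch of the Keeper half of that wait, assuming the kazoo library; the function name and timings are illustrative, only KazooClient itself is the real API:

import time
from kazoo.client import KazooClient
from kazoo.handlers.threading import KazooTimeoutError

def wait_for_keeper(ip, port=2181, retries=30):
    for _ in range(retries):
        print(f"Connecting to {ip}:{port}, use_ssl: False")
        zk = KazooClient(hosts=f"{ip}:{port}")
        try:
            zk.start(timeout=5)   # keeps failing while the port refuses connections
            return zk             # Keeper is up; hand the client to the test
        except KazooTimeoutError:
            zk.stop()
            time.sleep(1)         # one "Connection refused" line per attempt
    raise TimeoutError(f"Keeper at {ip}:{port} never came up")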
http://localhost:None "GET /v1.46/containers/roottestrole-gw1-instance-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None Stderr:time="2025-04-02T03:19:18Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreloadclustersconfig-gw7_default Creating Stderr: Network roottestreloadclustersconfig-gw7_default Created Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Creating Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Creating Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Creating Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Created Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Created Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Created Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Starting Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Starting Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Starting Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Started Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Started Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Started Stderr:time="2025-04-02T03:19:19Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:19:19Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreloadclustersconfig-gw7-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.4.2, port:2181, use_ssl:False Connecting to 172.16.4.2(172.16.4.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None Stderr: prometheus_reader Skipped - Image is already being pulled by prometheus_writer Stderr: prometheus_writer Pulling Stderr: node Pulling Stderr: 9fa9226be034 Pulling fs layer Stderr: 1617e25568b2 Pulling fs layer 
Stderr: 52e274219e9a Pulling fs layer Stderr: 3d2f97fbf1fd Pulling fs layer Stderr: 4074b1353672 Pulling fs layer Stderr: 5425e01d7f3c Pulling fs layer Stderr: 0926657f3b6b Pulling fs layer Stderr: c9ecc1017088 Pulling fs layer Stderr: 238f9bf935c9 Pulling fs layer Stderr: 794f1dd56e5b Pulling fs layer Stderr: c8699fb3f236 Pulling fs layer Stderr: a9784cd47caf Pulling fs layer Stderr: c9ecc1017088 Waiting Stderr: 4074b1353672 Waiting Stderr: 794f1dd56e5b Waiting Stderr: 5425e01d7f3c Waiting Stderr: c8699fb3f236 Waiting Stderr: 3d2f97fbf1fd Waiting Stderr: a9784cd47caf Waiting Stderr: 0926657f3b6b Waiting Stderr: 9fa9226be034 Downloading [> ] 13.78kB/783kB Stderr: 52e274219e9a Downloading [> ] 528.9kB/52.69MB Stderr: 9fa9226be034 Downloading [==================================================>] 783kB/783kB Stderr: 9fa9226be034 Verifying Checksum Stderr: 9fa9226be034 Download complete Stderr: 1617e25568b2 Downloading [=> ] 13.78kB/480.9kB Stderr: 9fa9226be034 Extracting [==> ] 32.77kB/783kB Stderr: 1617e25568b2 Download complete Stderr: 3d2f97fbf1fd Downloading [> ] 498.7kB/47.38MB Stderr: 4074b1353672 Downloading [==================================================>] 604B/604B Stderr: 4074b1353672 Verifying Checksum Stderr: 4074b1353672 Download complete Stderr: 9fa9226be034 Extracting [==================================================>] 783kB/783kB Stderr: 5425e01d7f3c Downloading [==================================================>] 2.677kB/2.677kB Stderr: 5425e01d7f3c Verifying Checksum Stderr: 5425e01d7f3c Download complete Stderr: 0926657f3b6b Downloading [==================================================>] 3.088kB/3.088kB Stderr: 0926657f3b6b Verifying Checksum Stderr: 0926657f3b6b Download complete Stderr: c9ecc1017088 Downloading [==================================================>] 4.022kB/4.022kB Stderr: c9ecc1017088 Verifying Checksum Stderr: c9ecc1017088 Download complete Stderr: 238f9bf935c9 Downloading [==================================================>] 1.441kB/1.441kB Stderr: 238f9bf935c9 Verifying Checksum Stderr: 238f9bf935c9 Download complete Stderr: 794f1dd56e5b Downloading [=> ] 3.645kB/138.8kB Stderr: 794f1dd56e5b Download complete Stderr: c8699fb3f236 Downloading [==================================================>] 100B/100B Stderr: c8699fb3f236 Verifying Checksum Stderr: c8699fb3f236 Download complete Stderr: a9784cd47caf Downloading [==================================================>] 723B/723B Stderr: a9784cd47caf Verifying Checksum Stderr: a9784cd47caf Download complete Stderr: 9fa9226be034 Pull complete Stderr: 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB Stderr: 52e274219e9a Downloading [================================> ] 34.44MB/52.69MB Stderr: 3d2f97fbf1fd Downloading [=====================================> ] 35.39MB/47.38MB Stderr: 3d2f97fbf1fd Verifying Checksum Stderr: 3d2f97fbf1fd Download complete Stderr: 52e274219e9a Verifying Checksum Stderr: 52e274219e9a Download complete Stderr: 1617e25568b2 Extracting [============================================> ] 426kB/480.9kB Stderr: 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB Stderr: 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB Stderr: 1617e25568b2 Pull complete Stderr: 52e274219e9a Extracting [> ] 557.1kB/52.69MB Stderr: node Pulled Stderr: 52e274219e9a Extracting [=====> ] 5.571MB/52.69MB Stderr: 52e274219e9a Extracting [============> ] 12.81MB/52.69MB Stderr: 52e274219e9a Extracting [===================> 
] 20.05MB/52.69MB Stderr: 52e274219e9a Extracting [=======================> ] 25.07MB/52.69MB Stderr: 52e274219e9a Extracting [=====================================> ] 39.55MB/52.69MB Stderr: 52e274219e9a Extracting [==================================================>] 52.69MB/52.69MB Stderr: 52e274219e9a Pull complete Connecting to 172.16.4.2(172.16.4.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stderr: 3d2f97fbf1fd Extracting [> ] 491.5kB/47.38MB Stderr: 3d2f97fbf1fd Extracting [=======> ] 6.881MB/47.38MB Stderr: 3d2f97fbf1fd Extracting [=============> ] 12.78MB/47.38MB Stderr: 3d2f97fbf1fd Extracting [==================> ] 17.2MB/47.38MB Stderr: 3d2f97fbf1fd Extracting [===============================> ] 29.49MB/47.38MB Stderr: 3d2f97fbf1fd Extracting [==================================================>] 47.38MB/47.38MB Stderr: 3d2f97fbf1fd Pull complete Stderr: 4074b1353672 Extracting [==================================================>] 604B/604B Stderr: 4074b1353672 Extracting [==================================================>] 604B/604B Stderr: 4074b1353672 Pull complete Stderr: 5425e01d7f3c Extracting [==================================================>] 2.677kB/2.677kB Stderr: 5425e01d7f3c Extracting [==================================================>] 2.677kB/2.677kB Stderr: 5425e01d7f3c Pull complete Stderr: 0926657f3b6b Extracting [==================================================>] 3.088kB/3.088kB Stderr: 0926657f3b6b Extracting [==================================================>] 3.088kB/3.088kB Stderr: 0926657f3b6b Pull complete Stderr: c9ecc1017088 Extracting [==================================================>] 4.022kB/4.022kB Stderr: c9ecc1017088 Extracting [==================================================>] 4.022kB/4.022kB Stderr: c9ecc1017088 Pull complete Stderr: 238f9bf935c9 Extracting [==================================================>] 1.441kB/1.441kB Stderr: 238f9bf935c9 Extracting [==================================================>] 1.441kB/1.441kB Stderr: 238f9bf935c9 Pull complete Stderr: 794f1dd56e5b Extracting [===========> ] 32.77kB/138.8kB Stderr: 794f1dd56e5b Extracting [==================================================>] 138.8kB/138.8kB Stderr: 794f1dd56e5b Extracting [==================================================>] 138.8kB/138.8kB Stderr: 794f1dd56e5b Pull complete Stderr: c8699fb3f236 Extracting [==================================================>] 100B/100B Stderr: c8699fb3f236 Extracting [==================================================>] 100B/100B Stderr: c8699fb3f236 Pull complete Stderr: a9784cd47caf Extracting [==================================================>] 723B/723B Stderr: a9784cd47caf Extracting [==================================================>] 723B/723B Stderr: a9784cd47caf Pull complete Stderr: prometheus_writer Pulled Trying to create Prometheus instances by command docker compose --project-name roottestprometheusprotocols-gw2 --env-file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_prometheus.yml --verbose up -d Command:[docker compose --project-name roottestprometheusprotocols-gw2 --env-file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_prometheus.yml --verbose up -d] http://localhost:None "GET 
http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/b4cd81622871d854581dae715f2df3b6ce154c05e76d3bc3e24388b927e9151c/json HTTP/1.1" 200 None
ClickHouse node started
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/first.crt\n /etc/clickhouse-server/config.d/first.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF']
Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/first.crt /etc/clickhouse-server/config.d/first.key true true sslv2,sslv3 true EOF]
Connecting to 172.16.4.2(172.16.4.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
Stderr:time="2025-04-02T03:19:18Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network roottestreplicationcredentials-gw9_default Creating
Stderr: Network roottestreplicationcredentials-gw9_default Created
Stderr: Container roottestreplicationcredentials-gw9-zoo2-1 Creating
Stderr: Container roottestreplicationcredentials-gw9-zoo3-1 Creating
Stderr: Container roottestreplicationcredentials-gw9-zoo1-1 Creating
Stderr: Container roottestreplicationcredentials-gw9-zoo3-1 Created
Stderr: Container roottestreplicationcredentials-gw9-zoo1-1 Created
Stderr: Container roottestreplicationcredentials-gw9-zoo2-1 Created
Stderr: Container roottestreplicationcredentials-gw9-zoo2-1 Starting
Stderr: Container roottestreplicationcredentials-gw9-zoo3-1 Starting
Stderr: Container roottestreplicationcredentials-gw9-zoo1-1 Starting
Stderr: Container roottestreplicationcredentials-gw9-zoo1-1 Started
Stderr: Container roottestreplicationcredentials-gw9-zoo2-1 Started
Stderr: Container roottestreplicationcredentials-gw9-zoo3-1 Started
Stderr:time="2025-04-02T03:19:20Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-02T03:19:20Z" level=debug msg="otel error" error=""
Wait ZooKeeper to start
get_instance_ip instance_name=zoo1
http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-zoo1-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo1, ip:172.16.5.2, port:2181, use_ssl:False
Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Executing query SYSTEM RELOAD CONFIG on node
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Stderr:time="2025-04-02T03:19:18Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network roottests3cluster-gw5_default Creating
Stderr: Network roottests3cluster-gw5_default Created
Stderr: Container roottests3cluster-gw5-zoo3-1 Creating
Stderr: Container roottests3cluster-gw5-zoo1-1 Creating
Stderr: Container roottests3cluster-gw5-zoo2-1 Creating
Stderr: Container roottests3cluster-gw5-zoo1-1 Created
Stderr: Container roottests3cluster-gw5-zoo2-1 Created
Stderr: Container roottests3cluster-gw5-zoo3-1 Created
Stderr: Container roottests3cluster-gw5-zoo1-1 Starting
Stderr: Container roottests3cluster-gw5-zoo2-1 Starting
Stderr: Container roottests3cluster-gw5-zoo3-1 Starting
Stderr: Container roottests3cluster-gw5-zoo1-1 Started
Stderr: Container roottests3cluster-gw5-zoo3-1 Started
Stderr: Container roottests3cluster-gw5-zoo2-1 Started
Stderr:time="2025-04-02T03:19:20Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-02T03:19:20Z" level=debug msg="otel error" error=""
Wait ZooKeeper to start
get_instance_ip instance_name=zoo1
Connecting to 172.16.4.2(172.16.4.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-zoo1-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo1, ip:172.16.6.3, port:2181, use_ssl:False
Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
Stderr:time="2025-04-02T03:19:18Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network roottestpostgresqldatabaseengine-gw0_default Creating
Stderr: Network roottestpostgresqldatabaseengine-gw0_default Created
Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Creating
Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Created
Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Starting
Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Started
Stderr:time="2025-04-02T03:19:20Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-02T03:19:20Z" level=debug msg="otel error" error=""
get_instance_ip instance_name=postgres1
http://localhost:None "GET /v1.46/containers/roottestpostgresqldatabaseengine-gw0-postgres1-1/json HTTP/1.1" 200 None
Can't connect to Postgres connection to server at "172.16.7.2", port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
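The repeated "Connecting to ...:2181 ... Connection refused" lines are the ZooKeeper readiness poll: kazoo keeps failing until the container starts listening. A sketch of such a wait loop (illustrative shape only, not the harness's exact code):

    import time
    from kazoo.client import KazooClient
    from kazoo.handlers.threading import KazooTimeoutError

    def wait_zookeeper(ip: str, port: int = 2181, attempts: int = 10) -> KazooClient:
        # Poll until the ZooKeeper container accepts connections; each failed
        # round corresponds to one "Connection refused" line in this log.
        for _ in range(attempts):
            zk = KazooClient(hosts=f"{ip}:{port}")
            try:
                zk.start(timeout=5)  # raises KazooTimeoutError while the port is closed
                return zk
            except KazooTimeoutError:
                time.sleep(1)
        raise RuntimeError(f"ZooKeeper at {ip}:{port} never became available")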
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/]
Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/a3eeb924630af6b22d8ff23cd099bbdf594656b9d943ac0166cd15ab40a4ce8f/json HTTP/1.1" 200 None
ClickHouse instance started
Executing query CREATE TABLE test_table(x UInt32, y UInt32) ENGINE = MergeTree ORDER BY tuple() on instance
Stderr: Network roottestreloadingsettingsfromusersxml-gw4_default Creating
Stderr: Network roottestreloadingsettingsfromusersxml-gw4_default Created
Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Creating
Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Created
Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Starting
Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Started
ClickHouse instance created
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestreloadingsettingsfromusersxml-gw4-node-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestreloadingsettingsfromusersxml-gw4-node-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node, ip: 172.16.8.2...
http://localhost:None "GET /v1.46/containers/roottestreloadingsettingsfromusersxml-gw4-node-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
Stdout:Ok.
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/ECcert.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/ECcert.crt https://localhost:8443/]
Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
Exitcode:60
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/ECcert.crt\n /etc/clickhouse-server/config.d/ECcert.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF']
Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/ECcert.crt /etc/clickhouse-server/config.d/ECcert.key true true sslv2,sslv3 true EOF]
Executing query SYSTEM RELOAD CONFIG on node
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
Stderr:time="2025-04-02T03:19:18Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network roottestremoteblobsnamingbackwardcompatibility-gw3_default Creating
Stderr: Network roottestremoteblobsnamingbackwardcompatibility-gw3_default Created
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Creating
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Creating
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Creating
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Created
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Created
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Created
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Starting
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Starting
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Starting
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Started
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Started
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Started
Stderr:time="2025-04-02T03:19:21Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-02T03:19:21Z" level=debug msg="otel error" error=""
Wait ZooKeeper to start
get_instance_ip instance_name=zoo1
http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo1, ip:172.16.9.3, port:2181, use_ssl:False
Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
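Exitcode:60 above is curl's "peer certificate cannot be authenticated with known CA certificates": the server is still serving the old certificate, so verification against the other CA file fails until SYSTEM RELOAD CONFIG takes effect. A sketch of the check this test repeats, assuming a hypothetical helper name:

    import subprocess

    def https_ok(container: str, ca_path: str) -> bool:
        # Mirrors the log's probe: curl exits 0 (and prints "Ok.") when the
        # served certificate validates against ca_path, and 60 when it does not.
        proc = subprocess.run(
            ["docker", "exec", container,
             "curl", "--silent", "--cacert", ca_path, "https://localhost:8443/"],
            capture_output=True, text=True,
        )
        return proc.returncode == 0

In the test_ECcert_reload sequence above, first.crt validates before the reload and ECcert.crt validates after it, with the opposite probe returning exit code 60 each time.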
server at "172.16.7.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None Executing query INSERT INTO test_table VALUES (1,5), (2,10) on instance run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/ECcert.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/ECcert.crt https://localhost:8443/] Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stdout:Ok. run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/] http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None Exitcode:60 [gw6] PASSED test_reload_certificate/test.py::test_ECcert_reload run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/first.crt\n /etc/clickhouse-server/config.d/first.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF'] test_reload_certificate/test.py::test_cert_with_pass_phrase Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/first.crt /etc/clickhouse-server/config.d/first.key true true sslv2,sslv3 true EOF] Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stderr:time="2025-04-02T03:19:19Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestprometheusprotocols-gw2_default Creating Stderr: Network roottestprometheusprotocols-gw2_default Created Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Creating Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Creating Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Created Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Created Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Starting Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Starting Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Started Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Started Stderr:time="2025-04-02T03:19:21Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:19:21Z" level=debug msg="otel error" error="" Trying to connect to Prometheus... 
get_instance_ip instance_name=prometheus_reader http://localhost:None "GET /v1.46/containers/roottestprometheusprotocols-gw2-prometheus_reader-1/json HTTP/1.1" 200 None get_instance_ip instance_name=prometheus_writer http://localhost:None "GET /v1.46/containers/roottestprometheusprotocols-gw2-prometheus_writer-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.3:9091 HTTPConnectionPool(host='172.16.10.3', port=9091): Max retries exceeded with url: /api/v1/query?query=time() (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) Attempt 1 failed, retrying in 2 seconds Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None Executing query CREATE USER A on instance http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None Executing query CREATE USER B on instance run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/] Can't connect to Postgres connection to server at "172.16.7.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None Stdout:Ok. 
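The Prometheus reader and writer are polled with /api/v1/query?query=time() until they answer 200; the log shows attempt 1 failing with Errno 111 and the endpoint becoming available about two seconds later. A sketch of that readiness loop (hypothetical helper name, not the harness's exact code):

    import time
    import requests

    def wait_http(url: str, attempts: int = 10, delay: float = 2.0) -> None:
        # Poll until the endpoint answers; mirrors "Attempt 1 failed, retrying
        # in 2 seconds" followed by "... is available after 2.00... seconds".
        start = time.time()
        for attempt in range(1, attempts + 1):
            try:
                requests.get(url, timeout=5).raise_for_status()
                print(f"{url} is available after {time.time() - start} seconds")
                return
            except requests.RequestException:
                print(f"Attempt {attempt} failed, retrying in {delay} seconds")
                time.sleep(delay)
        raise TimeoutError(url)

    # e.g. wait_http("http://172.16.10.3:9091/api/v1/query?query=time()")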
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/WithPassPhrase.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/WithPassPhrase.crt https://localhost:8443/]
Connecting to 172.16.4.2(172.16.4.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Exitcode:60
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/WithPassPhrase.crt\n /etc/clickhouse-server/config.d/WithPassPhrase.key\n true\n true\n sslv2,sslv3\n true\n \n KeyFileHandler\n \n test\n \n \n\n \n \n\nEOF']
Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/WithPassPhrase.crt /etc/clickhouse-server/config.d/WithPassPhrase.key true true sslv2,sslv3 true KeyFileHandler test EOF]
Executing query CREATE ROLE R1 on instance
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
Executing query SYSTEM RELOAD CONFIG on node
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
Executing query GRANT SELECT ON test_table TO R1 on instance
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/WithPassPhrase.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/WithPassPhrase.crt https://localhost:8443/]
http://localhost:None "GET /v1.46/containers/04cfce8affa1ffde0d362ad820f4aa146d2c2f4690a4279ea4a1b488a3df01a9/json HTTP/1.1" 200 None
ClickHouse node started
run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml']
Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml]
Stdout:Ok.
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/]
Executing query SYSTEM RELOAD CONFIG on node
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Executing query SELECT * FROM test_table on instance
Zookeeper connection established, state: CONNECTED
Postgres Started
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env --project-name roottestpostgresqldatabaseengine-gw0 --file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env --project-name roottestpostgresqldatabaseengine-gw0 --file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml up -d --no-recreate]
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Exitcode:60
[gw6] PASSED test_reload_certificate/test.py::test_cert_with_pass_phrase
test_reload_certificate/test.py::test_chain_reload
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/first.crt\n /etc/clickhouse-server/config.d/first.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF']
Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/first.crt /etc/clickhouse-server/config.d/first.key true true sslv2,sslv3 true EOF]
Failed connecting to Zookeeper within the connection retry policy.
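The users.d override is shipped as a base64 blob so it survives shell quoting; decoding the payload written above yields the profile XML that the later getSetting() queries check. A short verification, using the exact blob from this log:

    import base64

    payload = "PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo="
    # Decodes to:
    # <clickhouse>
    #     <profiles replace="replace">
    #         <default>
    #             <max_memory_usage>10000000000</max_memory_usage>
    #             <load_balancing>first_or_random</load_balancing>
    #             <replication_alter_partitions_sync>2</replication_alter_partitions_sync>
    #         </default>
    #     </profiles>
    # </clickhouse>
    print(base64.b64decode(payload).decode())

The second blob written later in this log (for test_force_reload's follow-up) carries the same structure with max_memory_usage 20000000000, load_balancing nearest_hostname, and replication_alter_partitions_sync 0.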
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-zoo2-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo2, ip:172.16.2.2, port:2181, use_ssl:False
Executing query SYSTEM RELOAD CONFIG on node
Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.2.3, port:2181, use_ssl:False
Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Executing query SELECT getSetting('max_memory_usage') on node
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Executing query GRANT R1 TO A on instance
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/]
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env --project-name roottestrestorereplica-gw8 --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/docker-compose.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env --project-name roottestrestorereplica-gw8 --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/docker-compose.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/docker-compose.yml up -d --no-recreate]
Stdout:Ok.
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/WithChain.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/WithChain.crt https://localhost:8443/]
Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Running
Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Creating
Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Created
Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Starting
Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Started
ClickHouse instance created
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestpostgresqldatabaseengine-gw0-node1-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestpostgresqldatabaseengine-gw0-node1-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node1, ip: 172.16.7.3...
http://localhost:None "GET /v1.46/containers/roottestpostgresqldatabaseengine-gw0-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Exitcode:60 run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/WithChain.crt\n /etc/clickhouse-server/config.d/WithChain.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF'] Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/WithChain.crt /etc/clickhouse-server/config.d/WithChain.key true true sslv2,sslv3 true EOF] Executing query GRANT R1 TO B on instance Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Executing query SELECT getSetting('load_balancing') on node http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Executing query SELECT getSetting('alter_sync') on node http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Executing query SELECT * FROM test_table on instance run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/WithChain.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/WithChain.crt https://localhost:8443/] http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Stdout:Ok. 
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/] Exitcode:60 run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', "openssl s_client -showcerts -servername localhost -connect localhost:8443 /dev/null | grep 'BEGIN CERTIFICATE' | wc -l"] Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c openssl s_client -showcerts -servername localhost -connect localhost:8443 /dev/null | grep 'BEGIN CERTIFICATE' | wc -l] Stderr: Container roottestrestorereplica-gw8-zoo3-1 Running Stderr: Container roottestrestorereplica-gw8-zoo1-1 Running Stderr: Container roottestrestorereplica-gw8-zoo2-1 Running Stderr: Container roottestrestorereplica-gw8-replica3-1 Creating Stderr: Container roottestrestorereplica-gw8-replica1-1 Creating Stderr: Container roottestrestorereplica-gw8-replica2-1 Creating Stderr: Container roottestrestorereplica-gw8-replica2-1 Created Stderr: Container roottestrestorereplica-gw8-replica1-1 Created Stderr: Container roottestrestorereplica-gw8-replica3-1 Created Stderr: Container roottestrestorereplica-gw8-replica1-1 Starting Stderr: Container roottestrestorereplica-gw8-replica3-1 Starting Stderr: Container roottestrestorereplica-gw8-replica2-1 Starting Stderr: Container roottestrestorereplica-gw8-replica1-1 Started Stderr: Container roottestrestorereplica-gw8-replica2-1 Started Stderr: Container roottestrestorereplica-gw8-replica3-1 Started ClickHouse instance created get_instance_ip instance_name=replica1 http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=replica1 http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in replica1, ip: 172.16.2.5... 
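test_chain_reload verifies the served certificate chain by counting PEM blocks in openssl s_client output (the "< /dev/null" redirect, restored above where the log sanitizer had eaten the angle bracket, stops s_client from waiting on stdin); the later Stdout:2 confirms two certificates after the chained cert is loaded. A sketch of that check, assuming a hypothetical helper name:

    import subprocess

    def served_chain_length(container: str, port: int = 8443) -> int:
        # Count "BEGIN CERTIFICATE" markers in the chain the server presents;
        # the log's test_chain_reload expects 2 once WithChain.crt is active.
        shell = (
            f"openssl s_client -showcerts -servername localhost "
            f"-connect localhost:{port} < /dev/null | grep 'BEGIN CERTIFICATE' | wc -l"
        )
        out = subprocess.run(
            ["docker", "exec", container, "bash", "-c", shell],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.strip())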
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Executing query GRANT R1 TO A WITH ADMIN OPTION on instance Stdout:2 [gw6] PASSED test_reload_certificate/test.py::test_chain_reload test_reload_certificate/test.py::test_first_than_second_cert run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/first.crt\n /etc/clickhouse-server/config.d/first.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF'] Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/first.crt /etc/clickhouse-server/config.d/first.key true true sslv2,sslv3 true EOF] Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SYSTEM RELOAD CONFIG on node run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjIwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+bmVhcmVzdF9ob3N0bmFtZTwvbG9hZF9iYWxhbmNpbmc+CiAgICAgICAgICAgIDxyZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+MDwvcmVwbGljYXRpb25fYWx0ZXJfcGFydGl0aW9uc19zeW5jPgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjIwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+bmVhcmVzdF9ob3N0bmFtZTwvbG9hZF9iYWxhbmNpbmc+CiAgICAgICAgICAgIDxyZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+MDwvcmVwbGljYXRpb25fYWx0ZXJfcGFydGl0aW9uc19zeW5jPgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml] http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Executing query GRANT R1 TO B on instance run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/] 
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.3:9091 http://172.16.10.3:9091 "GET /api/v1/query?query=time() HTTP/1.1" 200 104 http://172.16.10.3:9091/api/v1/query?query=time() is available after 2.004345655441284 seconds Starting new HTTP connection (1): 172.16.10.2:9090 http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None http://172.16.10.2:9090 "GET /api/v1/query?query=time() HTTP/1.1" 200 104 http://172.16.10.2:9090/api/v1/query?query=time() is available after 0.003067493438720703 seconds ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env --project-name roottestprometheusprotocols-gw2 --file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_prometheus.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env --project-name roottestprometheusprotocols-gw2 --file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_prometheus.yml up -d --no-recreate] Executing query SELECT getSetting('max_memory_usage') on node Stdout:Ok. run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/second.crt', 'https://localhost:8443/'] Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/second.crt https://localhost:8443/] Exitcode:60 run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat > /etc/clickhouse-server/config.d/cert.xml << EOF\n\n 8443\n \n \n /etc/clickhouse-server/config.d/second.crt\n /etc/clickhouse-server/config.d/second.key\n true\n true\n sslv2,sslv3\n true\n \n \n \n\nEOF'] Command:[docker exec roottestreloadcertificate-gw6-node-1 bash -c cat > /etc/clickhouse-server/config.d/cert.xml << EOF 8443 /etc/clickhouse-server/config.d/second.crt /etc/clickhouse-server/config.d/second.key true true sslv2,sslv3 true EOF] http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None Connecting to 172.16.4.2(172.16.4.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD CONFIG on node Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query SELECT * FROM test_table on instance http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
http://localhost:None "GET /v1.46/containers/roottestreloadclustersconfig-gw7-zoo2-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo2, ip:172.16.4.4, port:2181, use_ssl:False
Connecting to 172.16.4.4(172.16.4.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None
Executing query SELECT getSetting('load_balancing') on node
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestreloadclustersconfig-gw7-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.4.3, port:2181, use_ssl:False
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
Connecting to 172.16.4.3(172.16.4.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None
Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Running
Stderr: Container roottestprometheusprotocols-gw2-node-1 Creating
Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Running
Stderr: Container roottestprometheusprotocols-gw2-node-1 Created
Stderr: Container roottestprometheusprotocols-gw2-node-1 Starting
Stderr: Container roottestprometheusprotocols-gw2-node-1 Started
ClickHouse instance created
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestprometheusprotocols-gw2-node-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestprometheusprotocols-gw2-node-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node, ip: 172.16.10.4...
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/second.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/second.crt https://localhost:8443/]
http://localhost:None "GET /v1.46/containers/roottestprometheusprotocols-gw2-node-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Stdout:Ok.
run container_id:roottestreloadcertificate-gw6-node-1 detach:False nothrow:False cmd: ['curl', '--silent', '--cacert', '/etc/clickhouse-server/config.d/first.crt', 'https://localhost:8443/']
Command:[docker exec roottestreloadcertificate-gw6-node-1 curl --silent --cacert /etc/clickhouse-server/config.d/first.crt https://localhost:8443/]
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate]
Executing query DROP USER IF EXISTS A, B on instance
[gw1] PASSED test_role/test.py::test_admin_option
http://localhost:None "GET /v1.46/containers/6d7fd6dba34a75f7668f749c3b432b5f6475fb938413bacac6bf92b15ba26701/json HTTP/1.1" 200 None
ClickHouse node1 started
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Connecting to 172.16.5.2(172.16.5.2):2181, use_ssl: False
Exitcode:60
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/.env --project-name roottestreloadcertificate-gw6 --file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/docker-compose.yml stop --timeout 20]
[gw6] PASSED test_reload_certificate/test.py::test_first_than_second_cert
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Executing query SELECT getSetting('alter_sync') on node
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Executing query drop database if exists pg on node1
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
Failed connecting to Zookeeper within the connection retry policy.
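Interleaved with the certificate traffic above is the test_role suite: ADMIN OPTION is what lets a non-admin user grant a role onward, which is what test_admin_option exercises before its DROP USER / DROP ROLE teardown. A sketch of that flow in the instance.query(...) style of the integration-test harness, where `instance` is assumed to be a started ClickHouse node fixture:

    # Shape of the GRANT ... WITH ADMIN OPTION flow visible in this log;
    # illustrative only, not the literal body of test_role/test.py.
    instance.query("CREATE USER A")
    instance.query("CREATE USER B")
    instance.query("CREATE ROLE R1")
    instance.query("GRANT R1 TO A WITH ADMIN OPTION")
    # With ADMIN OPTION, user A may now grant R1 to B itself:
    instance.query("GRANT R1 TO B", user="A")
    # Teardown mirrors the log:
    instance.query("DROP USER IF EXISTS A, B")
    instance.query("DROP ROLE IF EXISTS R1, R2, R3, R4")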
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo2
http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-zoo2-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo2, ip:172.16.5.3, port:2181, use_ssl:False
Connecting to 172.16.5.3(172.16.5.3):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Failed connecting to Zookeeper within the connection retry policy.
Zookeeper session closed, state: CLOSED
get_instance_ip instance_name=zoo3
http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-zoo3-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo3, ip:172.16.5.4, port:2181, use_ssl:False
Connecting to 172.16.5.4(172.16.5.4):2181, use_ssl: False
Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None)
Zookeeper connection established, state: CONNECTED
Sending request(xid=1): GetChildren(path='/', watcher=None)
Received response(xid=1): ['keeper']
Sending request(xid=2): Close()
Connection dropped: socket connection broken
Transition to CONNECTING
Zookeeper connection lost
Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Executing query create database pg engine = PostgreSQL(postgres1) on node1
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
[gw4] PASSED test_reloading_settings_from_users_xml/test.py::test_force_reload
test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout
run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml']
Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml]
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Running
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Running
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Running
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Creating
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Created
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Starting
Failed connecting to Zookeeper within the connection retry policy.
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Started
Zookeeper session closed, state: CLOSED
ClickHouse instance created
get_instance_ip instance_name=node
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/.env --project-name roottestreplicationcredentials-gw9 --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node3/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node4/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node5/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node6/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node7/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node8/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/.env --project-name roottestreplicationcredentials-gw9 --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node3/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node4/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node5/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node6/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node7/docker-compose.yml --file /ClickHouse/tests/integration/test_replication_credentials/_instances-0-gw9/node8/docker-compose.yml up -d --no-recreate]
http://localhost:None "GET /v1.46/containers/roottestreloadclustersconfig-gw7-node-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node
Executing query SYSTEM RELOAD CONFIG on node
http://localhost:None "GET /v1.46/containers/roottestreloadclustersconfig-gw7-node-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node, ip: 172.16.4.5...
http://localhost:None "GET /v1.46/containers/roottestreloadclustersconfig-gw7-node-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Executing query CREATE USER A on instance
test_role/test.py::test_changing_default_roles_affects_new_sessions_only
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
Executing query show create table pg.test on node1
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/bf373494068562e60de27fdf717b629d4f3e8fdd3a4b010f98b229a13a150e70/json HTTP/1.1" 200 None
ClickHouse replica1 started
get_instance_ip instance_name=replica2
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica2-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=replica2
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica2-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in replica2, ip: 172.16.2.7...
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica2-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/15a9169250b6811e5d58e4e92ce22f46deba044910af939ec7a849f2bd8e5921/json HTTP/1.1" 200 None
ClickHouse replica2 started
get_instance_ip instance_name=replica3
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica3-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=replica3
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica3-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in replica3, ip: 172.16.2.6...
Executing query SELECT getSetting('max_memory_usage') on node
http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-replica3-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7ddd68df885a08d3304e5048bec01d403994ae4a031eb1553e2fa2f76a866066/json HTTP/1.1" 200 None
ClickHouse replica3 started
Executing query DROP TABLE IF EXISTS test SYNC on replica1
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Executing query CREATE ROLE R1, R2 on instance
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
Executing query detach table pg.test on node1
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
Executing query DROP TABLE IF EXISTS test SYNC on replica2
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
Executing query SELECT getSetting('load_balancing') on node
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/7c940a7701126895a5fe80cf0019cf5d98202424d2c9871cc0665728ca9505a5/json HTTP/1.1" 200 None
ClickHouse node started
Executing query CREATE TABLE prometheus (id UInt64) ENGINE=TimeSeries on node
Executing query attach table pg.test on node1
Executing query GRANT R1, R2 TO A on instance
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
Executing query SELECT getSetting('alter_sync') on node
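The CREATE TABLE ... ENGINE=TimeSeries above backs the Prometheus remote read/write tests: test_prometheus_protocols points the writer's remote_write at ClickHouse and the reader's remote_read back at it, then compares what both sides return. A hedged sketch of the probe the harness uses against each Prometheus instance (same /api/v1/query endpoint seen in this log; the helper name is illustrative):

    import requests

    def query_prometheus(ip: str, port: int, expr: str) -> dict:
        # Same HTTP API the readiness checks above hit with query=time();
        # test code can compare reader and writer results for an expression.
        resp = requests.get(
            f"http://{ip}:{port}/api/v1/query",
            params={"query": expr},
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()

    # e.g. query_prometheus("172.16.10.2", 9090, "up")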
roottestreplicationcredentials-gw9-node1-1 Creating Stderr: Container roottestreplicationcredentials-gw9-node6-1 Creating Stderr: Container roottestreplicationcredentials-gw9-node5-1 Creating Stderr: Container roottestreplicationcredentials-gw9-node2-1 Creating Stderr: Container roottestreplicationcredentials-gw9-node4-1 Creating Stderr: Container roottestreplicationcredentials-gw9-node3-1 Created Stderr: Container roottestreplicationcredentials-gw9-node1-1 Created Stderr: Container roottestreplicationcredentials-gw9-node7-1 Created Stderr: Container roottestreplicationcredentials-gw9-node5-1 Created Stderr: Container roottestreplicationcredentials-gw9-node2-1 Created Stderr: Container roottestreplicationcredentials-gw9-node8-1 Created Stderr: Container roottestreplicationcredentials-gw9-node6-1 Created Stderr: Container roottestreplicationcredentials-gw9-node4-1 Created Stderr: Container roottestreplicationcredentials-gw9-node8-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node4-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node2-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node5-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node7-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node3-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node1-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node6-1 Starting Stderr: Container roottestreplicationcredentials-gw9-node6-1 Started Stderr: Container roottestreplicationcredentials-gw9-node4-1 Started Stderr: Container roottestreplicationcredentials-gw9-node5-1 Started Stderr: Container roottestreplicationcredentials-gw9-node3-1 Started Stderr: Container roottestreplicationcredentials-gw9-node8-1 Started Stderr: Container roottestreplicationcredentials-gw9-node1-1 Started Stderr: Container roottestreplicationcredentials-gw9-node7-1 Started Stderr: Container roottestreplicationcredentials-gw9-node2-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.5.12... 
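The `SELECT getSetting('max_memory_usage')` / `getSetting('load_balancing')` / `getSetting('alter_sync')` probes interleaved above are how test_reloading_settings_from_users_xml verifies that a `SYSTEM RELOAD CONFIG` took effect. A sketch of that check, assuming `node` is the harness's instance object with its `query()` helper (the values here match the users.d/z.xml payload decoded a little further down):

```python
def check_effective_settings(node, max_memory_usage, load_balancing, alter_sync):
    # getSetting() reports the value in effect for the querying session,
    # so this reflects whatever users.d/z.xml was loaded last
    assert node.query("SELECT getSetting('max_memory_usage')").strip() == max_memory_usage
    assert node.query("SELECT getSetting('load_balancing')").strip() == load_balancing
    assert node.query("SELECT getSetting('alter_sync')").strip() == alter_sync

check_effective_settings(node, "20000000000", "nearest_hostname", "0")
```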
http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test SYNC on replica3 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query show create table pg.test on node1 http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query SHOW CURRENT ROLES on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.3.2:8123 "GET /?session_id=session+%231&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None Executing query SET DEFAULT ROLE R2 TO A on instance Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563965.416591 HTTP/1.1" 200 87 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica1') ORDER BY n PARTITION BY n % 10; on replica1 http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None [gw0] PASSED test_postgresql_database_engine/test.py::test_datetime test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL('postgres1:5432', 'postgres_database', 'postgres', 'mysecretpassword') on node1 Executing query SHOW CURRENT ROLES on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.3.2:8123 "GET /?session_id=session+%231&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None Executing query SHOW CURRENT ROLES on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.3.2:8123 "GET /?session_id=session+%232&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP USER IF EXISTS A, B on instance [gw1] PASSED test_role/test.py::test_changing_default_roles_affects_new_sessions_only http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica2') ORDER BY n PARTITION BY n % 10; on replica2 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DETACH TABLE postgres_database.array_columns on node1 http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 
200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query ATTACH TABLE postgres_database.array_columns on node1 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None test_role/test.py::test_combine_privileges Executing query CREATE USER A on instance http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica3') ORDER BY n PARTITION BY n % 10; on replica3 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query INSERT INTO postgres_database.array_columns VALUES ([[[1, 1], [1, 1]], [[3, 3], [3, 3]], [[4, 4], [5, 5]]], [[[1, NULL], [NULL, 1]], [[NULL, NULL], [NULL, NULL]], [[4, 4], [5, 5]]] ) on node1 Executing query CREATE ROLE R1 on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Executing query DROP TABLE IF EXISTS test SYNC on replica1 http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query SELECT * FROM postgres_database.array_columns on node1 Executing query CREATE ROLE R2 on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.3:9091 run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjIwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+bmVhcmVzdF9ob3N0bmFtZTwvbG9hZF9iYWxhbmNpbmc+CiAgICAgICAgICAgIDxyZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+MDwvcmVwbGljYXRpb25fYWx0ZXJfcGFydGl0aW9uc19zeW5jPgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname 
/etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjIwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+bmVhcmVzdF9ob3N0bmFtZTwvbG9hZF9iYWxhbmNpbmc+CiAgICAgICAgICAgIDxyZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+MDwvcmVwbGljYXRpb25fYWx0ZXJfcGFydGl0aW9uc19zeW5jPgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test SYNC on replica2 http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None Executing query SELECT getSetting('max_memory_usage') on node Executing query SELECT * FROM test_table on instance http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563965.416591 HTTP/1.1" 200 87 Executing query DROP TABLE IF EXISTS prometheus SYNC on node [gw2] PASSED test_prometheus_protocols/test.py::test_64bit_id Executing query DROP DATABASE postgres_database on node1 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5b1874cc9120e0f8db912124de627fb85e116a8fb3cea05617e6177d56de0639/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.5.8... http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/76971a0644beeaf4601ebbc13765f82e9a9dcdcc42ef1e82d1ee08cea1cec209/json HTTP/1.1" 200 None ClickHouse node2 started get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node3, ip: 172.16.5.10... http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/4d686d5cedfd08ae1561efe588d888f290cd7d5ac3fc2d975f6de8195e4cb76b/json HTTP/1.1" 200 None ClickHouse node3 started get_instance_ip instance_name=node4 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node4-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node4 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node4-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node4, ip: 172.16.5.5... 
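The `echo ... | base64 --decode > /etc/clickhouse-server/users.d/z.xml` commands above ship the new settings profile into the container as a base64 blob. Decoding the payload copied verbatim from the log (standard library only) shows exactly what gets written:

```python
import base64

payload = "PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjIwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+bmVhcmVzdF9ob3N0bmFtZTwvbG9hZF9iYWxhbmNpbmc+CiAgICAgICAgICAgIDxyZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+MDwvcmVwbGljYXRpb25fYWx0ZXJfcGFydGl0aW9uc19zeW5jPgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K"
print(base64.b64decode(payload).decode())
# <clickhouse>
#     <profiles replace="replace">
#         <default>
#             <max_memory_usage>20000000000</max_memory_usage>
#             <load_balancing>nearest_hostname</load_balancing>
#             <replication_alter_partitions_sync>0</replication_alter_partitions_sync>
#         </default>
#     </profiles>
# </clickhouse>
```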
http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node4-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/8d61da1a90d734c5f777debda38d7c09de2019fca1c527264fb03d295701ad59/json HTTP/1.1" 200 None ClickHouse node4 started get_instance_ip instance_name=node5 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node5-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node5 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node5-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node5, ip: 172.16.5.11... http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node5-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5c3acbcffc8826882daad094f47712031e134d461c9738f0181759c94b5ba85e/json HTTP/1.1" 200 None ClickHouse node5 started get_instance_ip instance_name=node6 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node6-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node6 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node6-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node6, ip: 172.16.5.7... http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node6-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/4649a5d38d15de077158f24144b9c13d57b9f1fd83baef12c0ab51b545fe4fac/json HTTP/1.1" 200 None ClickHouse node6 started get_instance_ip instance_name=node7 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node7-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node7 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node7-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node7, ip: 172.16.5.9... http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node7-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f8d046e842db18db5c0efbd6f0a8e3f2128c914a9024eac56f164cfd37498808/json HTTP/1.1" 200 None ClickHouse node7 started get_instance_ip instance_name=node8 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node8-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node8 http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node8-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node8, ip: 172.16.5.6... 
http://localhost:None "GET /v1.46/containers/roottestreplicationcredentials-gw9-node8-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b226d563ef23b4ced656c02026fd38ffee5d14f70bd239db8b8174049816c516/json HTTP/1.1" 200 None ClickHouse node8 started Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test4/replicated', 'node7') PARTITION BY toYYYYMM(date) ORDER BY id; on node7 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS test SYNC on replica3 Executing query GRANT R1 TO A on instance Executing query DROP TABLE IF EXISTS original SYNC on node Executing query SHOW DATABASES on node1 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query GRANT SELECT(x) ON test_table TO R1 on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica1') ORDER BY n PARTITION BY n % 10; on replica1 Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test4/replicated', 'node8') PARTITION BY toYYYYMM(date) ORDER BY id; on node8 Executing query DROP TABLE IF EXISTS mydata SYNC on node [gw0] PASSED test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL('google.com:5432', 'dummy', 'dummy', 'dummy') on node1 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT * FROM test_table on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS mytable SYNC on node Executing query SHOW DATABASES on node1 Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica2') ORDER BY n PARTITION BY n % 10; on replica2 Executing query insert into test_table values ('2017-06-21', 111, 0) on node7 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT getSetting('max_memory_usage') on node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT x FROM test_table on instance Executing query DROP TABLE IF EXISTS mymetrics SYNC on node Executing query SELECT DISTINCT(name) FROM system.tables WHERE engine!='PostgreSQL' AND name='COLUMNS' on node1 Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica3') ORDER BY n PARTITION BY n % 10; on replica3 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Connecting to 172.16.9.3(172.16.9.3):2181, use_ssl: False Sending request(xid=None): 
Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.6.4, port:2181, use_ssl:False Failed connecting to Zookeeper within the connection retry policy. Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.9.2, port:2181, use_ssl:False Connecting to 172.16.9.2(172.16.9.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Zookeeper connection established, state: CONNECTED Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query GRANT SELECT(y) ON test_table TO R2 on instance test_prometheus_protocols/test.py::test_create_as_table Executing query CREATE TABLE original ENGINE=TimeSeries on node Stderr: Container roottestreloadcertificate-gw6-node-1 Stopping Stderr: Container roottestreloadcertificate-gw6-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/.env --project-name 
roottestreloadcertificate-gw6 --file /ClickHouse/tests/integration/test_reload_certificate/_instances-0-gw6/node/docker-compose.yml down --volumes] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.9.4, port:2181, use_ssl:False Connecting to 172.16.9.4(172.16.9.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.6.2, port:2181, use_ssl:False Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query SELECT sum(n), count() FROM test on replica1 Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') Trying to create Minio instance by command docker compose --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Failed connecting to Zookeeper within the connection retry policy. 
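The Connect/GetChildren/Close exchanges above come from kazoo, which the harness's get_kazoo_client wraps. The same handshake, reduced to the bare kazoo API (host and port taken from the log):

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="172.16.9.4:2181", timeout=30.0)
zk.start()                    # Connect(...) / "Zookeeper connection established"
print(zk.get_children("/"))   # GetChildren(path='/') -> ['keeper']
zk.stop()                     # Close() / "Zookeeper session closed"
zk.close()
```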
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') Trying to create Minio instance by command docker compose --project-name roottests3cluster-gw5 --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name roottests3cluster-gw5 --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] Executing query GRANT R2 TO A on instance Executing query CREATE TABLE prometheus AS original on node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT sum(n), count() FROM test on replica2 Executing query SELECT * FROM test_table on instance Executing query SELECT getSetting('max_memory_usage') on node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563968.038509 HTTP/1.1" 200 161 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP USER IF EXISTS A, B on instance [gw1] PASSED test_role/test.py::test_combine_privileges Stderr: Container roottestreloadcertificate-gw6-node-1 Stopping Stderr: Container roottestreloadcertificate-gw6-node-1 Stopped Stderr: Container roottestreloadcertificate-gw6-node-1 Removing Stderr: Container roottestreloadcertificate-gw6-node-1 Removed Stderr: Network roottestreloadcertificate-gw6_default Removing Stderr: Network roottestreloadcertificate-gw6_default Removed Cleanup called Docker networks for project roottestreloadcertificate-gw6 are NETWORK ID NAME DRIVER SCOPE Executing query SELECT sum(n), count() FROM test on replica3 Docker containers for project roottestreloadcertificate-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Docker volumes for project roottestreloadcertificate-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreloadcertificate-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreloadcertificate-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:8 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 8 test_recompression_ttl/test.py::test_recompression_multiple_ttls Running tests in /ClickHouse/tests/integration/test_recompression_ttl/test.py Cluster start called. 
is_up=False http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Docker networks for project roottestrecompressionttl-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrecompressionttl-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrecompressionttl-gw6 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestrecompressionttl-gw6 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrecompressionttl-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query SELECT id FROM test_table order by id on node7 Docker volumes for project roottestrecompressionttl-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrecompressionttl-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Unstopped containers: {} No running containers for project: roottestrecompressionttl-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:8 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 8 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_recompression_ttl/configs/background_pool_config.xml'] to /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/database Setup logs dir /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_recompression_ttl/configs/background_pool_config.xml'] to /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/database Setup logs dir /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 
'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/.env --project-name roottestrecompressionttl-gw6 --file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/docker-compose.yml pull] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT getSetting('max_memory_usage') on node Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance Executing query INSERT INTO test SELECT number + 0 FROM numbers(200) on replica1 Executing query SELECT id FROM test_table order by id on node8 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Stderr:time="2025-04-02T03:19:27Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Volume "roottestremoteblobsnamingbackwardcompatibility-gw3_data1-1" Creating Stderr: Volume "roottestremoteblobsnamingbackwardcompatibility-gw3_data1-1" Created Stderr:time="2025-04-02T03:19:27Z" level=warning msg="Found orphan containers ([roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." 
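The "Setup directory for instance" / "Copy custom test config files" block above is the harness materializing a two-node, keeper-backed cluster. In the test source this corresponds to a fixture along these lines (a sketch using the helpers.cluster API; only the config file name is taken from the log):

```python
import pytest
from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance(
    "node1",
    main_configs=["configs/background_pool_config.xml"],
    with_zookeeper=True,
)
node2 = cluster.add_instance(
    "node2",
    main_configs=["configs/background_pool_config.xml"],
    with_zookeeper=True,
)

@pytest.fixture(scope="module")
def started_cluster():
    try:
        cluster.start()   # "Cluster start called." in the log
        yield cluster
    finally:
        cluster.shutdown()
```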
Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Started Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Started Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Started Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Started Stderr:time="2025-04-02T03:19:28Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:19:28Z" level=debug msg="otel error" error="" Trying to connect to Minio... get_instance_ip instance_name=minio1 http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=proxy1 http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.9.7:9001 Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (2): 172.16.9.7:9001 Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (3): 172.16.9.7:9001 Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (4): 172.16.9.7:9001 Can't connect to Minio: HTTPConnectionPool(host='172.16.9.7', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')) Executing query CREATE USER A on instance test_role/test.py::test_create_role Executing query SELECT getSetting('load_balancing') on node Executing query insert into test_table values ('2017-06-22', 222, 1) on node8 http://localhost:None "GET
/v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.3:9091 Stderr:time="2025-04-02T03:19:27Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Volume "roottests3cluster-gw5_data1-1" Creating Stderr: Volume "roottests3cluster-gw5_data1-1" Created Stderr:time="2025-04-02T03:19:27Z" level=warning msg="Found orphan containers ([roottests3cluster-gw5-zoo3-1 roottests3cluster-gw5-zoo2-1 roottests3cluster-gw5-zoo1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." Stderr: Container roottests3cluster-gw5-proxy2-1 Creating Stderr: Container roottests3cluster-gw5-proxy1-1 Creating Stderr: Container roottests3cluster-gw5-proxy2-1 Created Stderr: Container roottests3cluster-gw5-proxy1-1 Created Stderr: Container roottests3cluster-gw5-minio1-1 Creating Stderr: Container roottests3cluster-gw5-resolver-1 Creating Stderr: Container roottests3cluster-gw5-minio1-1 Created Stderr: Container roottests3cluster-gw5-resolver-1 Created Stderr: Container roottests3cluster-gw5-proxy1-1 Starting Stderr: Container roottests3cluster-gw5-proxy2-1 Starting Stderr: Container roottests3cluster-gw5-proxy1-1 Started Stderr: Container roottests3cluster-gw5-proxy2-1 Started Stderr: Container roottests3cluster-gw5-minio1-1 Starting Stderr: Container roottests3cluster-gw5-resolver-1 Starting Stderr: Container roottests3cluster-gw5-minio1-1 Started Stderr: Container roottests3cluster-gw5-resolver-1 Started Stderr:time="2025-04-02T03:19:29Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:19:29Z" level=debug msg="otel error" error="" Trying to connect to Minio... 
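The retry storm above ("Incremented Retry for ... Connection refused") is urllib3 burning through its Retry budget while the freshly started Minio container is not listening yet; the harness simply polls again. A standalone sketch of that readiness wait (attempt count and delay are assumptions):

```python
import time
import urllib3

def wait_for_minio(endpoint: str, attempts: int = 10, delay: float = 1.0) -> bool:
    http = urllib3.PoolManager(retries=False)
    for _ in range(attempts):
        try:
            http.request("GET", endpoint)
            return True  # "Connected to Minio."
        except urllib3.exceptions.HTTPError:
            time.sleep(delay)  # "Can't connect to Minio" -> try again
    return False

wait_for_minio("http://172.16.9.7:9001/")
```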
get_instance_ip instance_name=minio1 http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-minio1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=proxy1 http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-proxy1-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.6.8:9001 Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (2): 172.16.6.8:9001 Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (3): 172.16.6.8:9001 Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (4): 172.16.6.8:9001 Can't connect to Minio: HTTPConnectionPool(host='172.16.6.8', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 111] Connection refused')) http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query CREATE ROLE R1 on instance Executing query INSERT INTO test SELECT number + 200 FROM numbers(200) on replica1 http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563968.038509 HTTP/1.1" 200 87 Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563968.038509 HTTP/1.1" 200 161 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT getSetting('alter_sync') on node Executing query SELECT * FROM test_table on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None [gw4] PASSED test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname
/etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml] Executing query INSERT INTO test SELECT number + 400 FROM numbers(200) on replica1 Executing query GRANT SELECT ON test_table TO R1 on instance Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT * FROM test_table on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPmE8L21heF9tZW1vcnlfdXNhZ2U+CiAgICAgICAgICAgIDxsb2FkX2JhbGFuY2luZz5uZWFyZXN0X2hvc3RuYW1lPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4wPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPmE8L21heF9tZW1vcnlfdXNhZ2U+CiAgICAgICAgICAgIDxsb2FkX2JhbGFuY2luZz5uZWFyZXN0X2hvc3RuYW1lPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4wPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml] Executing query INSERT INTO test SELECT number + 600 FROM numbers(200) on replica1 Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query GRANT R1 TO A on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Starting new HTTP connection (5): 172.16.9.7:9001 http://172.16.9.7:9001 "GET / HTTP/1.1" 200 0 Connected to Minio. 
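The second z.xml payload written just above decodes to `<max_memory_usage>a</max_memory_usage>`, a deliberately invalid value, and the test_reloading_settings_from_users_xml case then expects `SYSTEM RELOAD CONFIG` to be rejected while the previously loaded values stay in effect. A sketch of that assertion (assuming the harness's query/query_and_get_error helpers; the surviving value is taken from the preceding valid payload):

```python
def check_invalid_setting_rejected(node):
    # reloading a profile with max_memory_usage = "a" must fail...
    assert node.query_and_get_error("SYSTEM RELOAD CONFIG") != ""
    # ...and the last successfully loaded value must survive
    assert node.query("SELECT getSetting('max_memory_usage')").strip() == "10000000000"
```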
http://172.16.9.7:9001 "GET /root?location= HTTP/1.1" 404 0 http://172.16.9.7:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.9.7:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.9.7:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/docker-compose.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/docker-compose.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/docker-compose.yml up -d --no-recreate] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT * FROM test_table on instance Executing query INSERT INTO test SELECT number + 800 FROM numbers(200) on replica1 Starting new HTTP connection (5): 172.16.6.8:9001 http://172.16.6.8:9001 "GET / HTTP/1.1" 200 0 Connected to Minio. 
http://172.16.6.8:9001 "GET /root?location= HTTP/1.1" 404 0 http://172.16.6.8:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.6.8:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.6.8:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --project-name roottests3cluster-gw5 --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --project-name roottests3cluster-gw5 --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/docker-compose.yml up -d --no-recreate] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT getSetting('max_memory_usage') on node Starting new HTTP connection (1): 172.16.10.3:9091 Executing query SELECT id FROM test_table order by id on node7 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query REVOKE R1 FROM A on instance http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563968.038509 HTTP/1.1" 200 87 Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563968.038509 HTTP/1.1" 200 161 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT getSetting('load_balancing') on node Executing query SELECT sum(n), count() FROM test on replica1 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT * FROM test_table on instance Executing query SELECT id FROM test_table order by id on node8 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Running Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Running Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Running Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Running Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Running Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Running Stderr: Container 
roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Running Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Creating Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Created Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Starting Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Started Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Started Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.9.11... Executing query SELECT getSetting('alter_sync') on node http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT sum(n), count() FROM test on replica2 http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP USER IF EXISTS A, B on instance [gw1] PASSED test_role/test.py::test_create_role http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None Stderr: Container roottests3cluster-gw5-zoo3-1 Running Stderr: Container roottests3cluster-gw5-proxy1-1 Running Stderr: Container roottests3cluster-gw5-zoo2-1 Running Stderr: Container roottests3cluster-gw5-zoo1-1 Running Stderr: Container roottests3cluster-gw5-s0_1_0-1 Creating Stderr: Container roottests3cluster-gw5-proxy2-1 Running Stderr: Container roottests3cluster-gw5-resolver-1 Running Stderr: Container roottests3cluster-gw5-s0_0_1-1 Creating Stderr: Container roottests3cluster-gw5-minio1-1 Running Stderr: Container roottests3cluster-gw5-s0_0_0-1 Creating Stderr: Container roottests3cluster-gw5-s0_0_1-1 Created Stderr: Container roottests3cluster-gw5-s0_0_0-1 Created Stderr: Container roottests3cluster-gw5-s0_1_0-1 Created Stderr: Container roottests3cluster-gw5-s0_0_1-1 Starting Stderr: Container roottests3cluster-gw5-s0_1_0-1 Starting Stderr: Container roottests3cluster-gw5-s0_0_0-1 Starting Stderr: Container roottests3cluster-gw5-s0_1_0-1 Started Stderr: Container 
roottests3cluster-gw5-s0_0_1-1 Started Stderr: Container roottests3cluster-gw5-s0_0_0-1 Started ClickHouse instance created get_instance_ip instance_name=s0_0_0 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_0_0-1/json HTTP/1.1" 200 None get_instance_ip instance_name=s0_0_0 http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_0_0-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_0_0, ip: 172.16.6.11... http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_0_0-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None [gw4] PASSED test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml] run container_id:roottestreplicationcredentials-gw9-node7-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n<clickhouse>\n    <interserver_http_port>9009</interserver_http_port>\n    <interserver_http_credentials>\n        <user>admin</user>\n        <password>222</password>\n        <allow_empty>true</allow_empty>\n    </interserver_http_credentials>\n</clickhouse>\n    ' > /etc/clickhouse-server/config.d/credentials1.xml"] Command:[docker exec roottestreplicationcredentials-gw9-node7-1 bash -c echo ' <clickhouse> <interserver_http_port>9009</interserver_http_port> <interserver_http_credentials> <user>admin</user> <password>222</password> <allow_empty>true</allow_empty> </interserver_http_credentials> </clickhouse> ' > /etc/clickhouse-server/config.d/credentials1.xml] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Executing query SELECT sum(n), count() FROM test on replica3 Executing query SYSTEM RELOAD CONFIG on node Executing query SYSTEM RELOAD CONFIG on node7 Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
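The `echo '...' > /etc/clickhouse-server/config.d/credentials1.xml` followed by `SYSTEM RELOAD CONFIG on node7` above is test_replication_credentials swapping interserver credentials at runtime. In test code this is typically done with the instance's replace_config helper, roughly as below (a sketch; the exact tag names in the XML are partly reconstructed assumptions, since the original log line had its markup stripped):

```python
NEW_CREDENTIALS = """
<clickhouse>
    <interserver_http_port>9009</interserver_http_port>
    <interserver_http_credentials>
        <user>admin</user>
        <password>222</password>
        <allow_empty>true</allow_empty>
    </interserver_http_credentials>
</clickhouse>
"""

def swap_interserver_credentials(node):
    # write the new config into config.d and ask the server to re-read it
    node.replace_config(
        "/etc/clickhouse-server/config.d/credentials1.xml", NEW_CREDENTIALS
    )
    node.query("SYSTEM RELOAD CONFIG")
```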
http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPmE8L21heF9tZW1vcnlfdXNhZ2U+CiAgICAgICAgICAgIDxsb2FkX2JhbGFuY2luZz5uZWFyZXN0X2hvc3RuYW1lPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4wPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPmE8L21heF9tZW1vcnlfdXNhZ2U+CiAgICAgICAgICAgIDxsb2FkX2JhbGFuY2luZz5uZWFyZXN0X2hvc3RuYW1lPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4wPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml] Executing query CREATE USER A on instance test_role/test.py::test_function_current_roles http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query insert into test_table values ('2017-06-22', 333, 1) on node7 http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.3:9091 Sending request(xid=1): GetChildren(path='/clickhouse/tables/test/replicas/replica2', watcher=None) Received response(xid=1): ['is_active', 'parts', 'metadata', 'log_pointer', 'host', 'is_lost', 'metadata_version', 'columns', 'mutation_pointer', 'queue', 'flags', 'min_unprocessed_insert_time', 'creator_info', 'max_processed_insert_time'] Sending request(xid=2): GetChildren(path='/clickhouse/tables/test/replicas/replica2/is_active', watcher=None) Received response(xid=2): [] Sending request(xid=3): Delete(path='/clickhouse/tables/test/replicas/replica2/is_active', version=-1) Received response(xid=3): True Sending request(xid=4): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts', watcher=None) Received response(xid=4): ['0_0_4_1', '6_3_3_0', '6_2_2_0', '0_4_4_0', '3_1_1_0', '1_1_1_0', '6_0_0_0', '4_0_0_0', '5_0_0_0', '8_0_4_1', '0_0_0_0', '2_3_3_0', '4_2_2_0', '6_0_4_1', '3_0_4_1', '9_2_2_0', '1_2_2_0', '5_1_1_0', '7_3_3_0', '1_4_4_0', '1_3_3_0', '4_3_3_0', '9_0_0_0', '8_4_4_0', '9_1_1_0', '7_1_1_0', '4_0_4_1', '3_2_2_0', '4_4_4_0', '5_0_4_1', 
'3_4_4_0', '7_0_0_0', '3_3_3_0', '9_4_4_0', '6_1_1_0', '8_0_0_0', '5_2_2_0', '3_0_0_0', '1_0_0_0', '2_1_1_0', '6_4_4_0', '0_2_2_0', '0_3_3_0', '2_4_4_0', '7_0_4_1', '7_4_4_0', '1_0_4_1', '7_2_2_0', '9_0_4_1', '2_0_4_1', '2_0_0_0', '8_2_2_0', '8_3_3_0', '5_3_3_0', '4_1_1_0', '2_2_2_0', '0_1_1_0', '9_3_3_0', '5_4_4_0', '8_1_1_0'] Sending request(xid=5): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/0_0_4_1', watcher=None) Received response(xid=5): [] Sending request(xid=6): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/0_0_4_1', version=-1) Received response(xid=6): True Sending request(xid=7): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/6_3_3_0', watcher=None) Received response(xid=7): [] Sending request(xid=8): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/6_3_3_0', version=-1) Received response(xid=8): True Sending request(xid=9): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/6_2_2_0', watcher=None) Received response(xid=9): [] Sending request(xid=10): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/6_2_2_0', version=-1) Received response(xid=10): True Sending request(xid=11): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/0_4_4_0', watcher=None) Received response(xid=11): [] Sending request(xid=12): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/0_4_4_0', version=-1) Received response(xid=12): True Sending request(xid=13): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/3_1_1_0', watcher=None) Received response(xid=13): [] Sending request(xid=14): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/3_1_1_0', version=-1) Received response(xid=14): True Sending request(xid=15): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/1_1_1_0', watcher=None) Received response(xid=15): [] Sending request(xid=16): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/1_1_1_0', version=-1) Received response(xid=16): True Sending request(xid=17): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/6_0_0_0', watcher=None) Received response(xid=17): [] Sending request(xid=18): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/6_0_0_0', version=-1) Received response(xid=18): True Sending request(xid=19): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/4_0_0_0', watcher=None) Received response(xid=19): [] Sending request(xid=20): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/4_0_0_0', version=-1) Received response(xid=20): True Sending request(xid=21): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/5_0_0_0', watcher=None) Received response(xid=21): [] Sending request(xid=22): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/5_0_0_0', version=-1) http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None Received response(xid=22): True Sending request(xid=23): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/8_0_4_1', watcher=None) Received response(xid=23): [] Sending request(xid=24): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/8_0_4_1', version=-1) Received response(xid=24): True Sending request(xid=25): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/0_0_0_0', watcher=None) Received response(xid=25): [] Sending request(xid=26): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/0_0_0_0', 
version=-1) Received response(xid=26): True Sending request(xid=27): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/2_3_3_0', watcher=None) Received response(xid=27): [] Sending request(xid=28): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/2_3_3_0', version=-1) Received response(xid=28): True Sending request(xid=29): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/4_2_2_0', watcher=None) Received response(xid=29): [] Sending request(xid=30): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/4_2_2_0', version=-1) Received response(xid=30): True Sending request(xid=31): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/6_0_4_1', watcher=None) Received response(xid=31): [] Sending request(xid=32): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/6_0_4_1', version=-1) http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Received response(xid=32): True Sending request(xid=33): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/3_0_4_1', watcher=None) Received response(xid=33): [] Sending request(xid=34): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/3_0_4_1', version=-1) Received response(xid=34): True Sending request(xid=35): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/9_2_2_0', watcher=None) Received response(xid=35): [] Sending request(xid=36): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/9_2_2_0', version=-1) http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Received response(xid=36): True Sending request(xid=37): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/1_2_2_0', watcher=None) Received response(xid=37): [] Sending request(xid=38): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/1_2_2_0', version=-1) Received response(xid=38): True Sending request(xid=39): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/5_1_1_0', watcher=None) Received response(xid=39): [] Sending request(xid=40): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/5_1_1_0', version=-1) Received response(xid=40): True Sending request(xid=41): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/7_3_3_0', watcher=None) Received response(xid=41): [] Sending request(xid=42): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/7_3_3_0', version=-1) Received response(xid=42): True Sending request(xid=43): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/1_4_4_0', watcher=None) Received response(xid=43): [] Sending request(xid=44): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/1_4_4_0', version=-1) Received response(xid=44): True Sending request(xid=45): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/1_3_3_0', watcher=None) Received response(xid=45): [] Sending request(xid=46): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/1_3_3_0', version=-1) Received response(xid=46): True Sending request(xid=47): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/4_3_3_0', watcher=None) Received response(xid=47): [] Sending request(xid=48): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/4_3_3_0', version=-1) Received response(xid=48): True Sending request(xid=49): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/9_0_0_0', 
watcher=None) Received response(xid=49): [] Sending request(xid=50): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/9_0_0_0', version=-1) Received response(xid=50): True Sending request(xid=51): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/8_4_4_0', watcher=None) Received response(xid=51): [] Sending request(xid=52): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/8_4_4_0', version=-1) Received response(xid=52): True Sending request(xid=53): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/9_1_1_0', watcher=None) Received response(xid=53): [] Sending request(xid=54): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/9_1_1_0', version=-1) Received response(xid=54): True Sending request(xid=55): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/7_1_1_0', watcher=None) Received response(xid=55): [] Sending request(xid=56): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/7_1_1_0', version=-1) Received response(xid=56): True Sending request(xid=57): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/4_0_4_1', watcher=None) Received response(xid=57): [] Sending request(xid=58): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/4_0_4_1', version=-1) Received response(xid=58): True Sending request(xid=59): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/3_2_2_0', watcher=None) Received response(xid=59): [] Sending request(xid=60): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/3_2_2_0', version=-1) Received response(xid=60): True Sending request(xid=61): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/4_4_4_0', watcher=None) Received response(xid=61): [] Sending request(xid=62): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/4_4_4_0', version=-1) Received response(xid=62): True Sending request(xid=63): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/5_0_4_1', watcher=None) Received response(xid=63): [] Sending request(xid=64): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/5_0_4_1', version=-1) Executing query CREATE ROLE R1, R2, R3, R4 on instance Received response(xid=64): True Sending request(xid=65): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/3_4_4_0', watcher=None) Received response(xid=65): [] Sending request(xid=66): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/3_4_4_0', version=-1) Received response(xid=66): True Sending request(xid=67): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/7_0_0_0', watcher=None) Received response(xid=67): [] Sending request(xid=68): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/7_0_0_0', version=-1) http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None Received response(xid=68): True Sending request(xid=69): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/3_3_3_0', watcher=None) Received response(xid=69): [] Sending request(xid=70): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/3_3_3_0', version=-1) Received response(xid=70): True Sending request(xid=71): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/9_4_4_0', watcher=None) Received response(xid=71): [] Sending request(xid=72): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/9_4_4_0', version=-1) Received response(xid=72): True Sending request(xid=73): 
GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/6_1_1_0', watcher=None) Received response(xid=73): [] Sending request(xid=74): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/6_1_1_0', version=-1) Received response(xid=74): True Sending request(xid=75): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/8_0_0_0', watcher=None) Received response(xid=75): [] Sending request(xid=76): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/8_0_0_0', version=-1) Received response(xid=76): True Sending request(xid=77): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/5_2_2_0', watcher=None) Received response(xid=77): [] Sending request(xid=78): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/5_2_2_0', version=-1) Received response(xid=78): True Sending request(xid=79): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/3_0_0_0', watcher=None) Received response(xid=79): [] Sending request(xid=80): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/3_0_0_0', version=-1) http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Received response(xid=80): True Sending request(xid=81): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/1_0_0_0', watcher=None) Received response(xid=81): [] Sending request(xid=82): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/1_0_0_0', version=-1) Received response(xid=82): True Sending request(xid=83): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/2_1_1_0', watcher=None) Received response(xid=83): [] Sending request(xid=84): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/2_1_1_0', version=-1) Received response(xid=84): True Sending request(xid=85): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/6_4_4_0', watcher=None) Received response(xid=85): [] Sending request(xid=86): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/6_4_4_0', version=-1) http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Received response(xid=86): True Sending request(xid=87): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/0_2_2_0', watcher=None) Received response(xid=87): [] Sending request(xid=88): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/0_2_2_0', version=-1) Received response(xid=88): True Sending request(xid=89): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/0_3_3_0', watcher=None) Received response(xid=89): [] Sending request(xid=90): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/0_3_3_0', version=-1) Received response(xid=90): True Sending request(xid=91): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/2_4_4_0', watcher=None) Received response(xid=91): [] Sending request(xid=92): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/2_4_4_0', version=-1) Received response(xid=92): True Executing query SYSTEM SYNC REPLICA test_table on node8 Sending request(xid=93): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/7_0_4_1', watcher=None) Received response(xid=93): [] Sending request(xid=94): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/7_0_4_1', version=-1) Received response(xid=94): True Sending request(xid=95): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/7_4_4_0', watcher=None) 
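The GetChildren/Delete pairs above and below are a depth-first removal of the `/clickhouse/tables/test/replicas/replica2` subtree, done before the SYSTEM RESTORE REPLICA seen later in the trace: list the children of a znode, recurse into each, then delete the node itself. In kazoo terms this is equivalent to the following sketch (the connection string is illustrative; kazoo's built-in `delete(..., recursive=True)` performs the same walk):

from kazoo.client import KazooClient

def delete_subtree(zk: KazooClient, path: str) -> None:
    # GetChildren, recurse, then Delete -- the exact request pattern in this trace.
    for child in zk.get_children(path):
        delete_subtree(zk, f"{path}/{child}")
    zk.delete(path)

zk = KazooClient(hosts="zoo1:2181")  # illustrative; the test wires in its own keeper quorum
zk.start()
delete_subtree(zk, "/clickhouse/tables/test/replicas/replica2")
# Matches the final Exists probe in the trace: the whole replica subtree is gone.
assert zk.exists("/clickhouse/tables/test/replicas/replica2") is None
zk.stop()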
Received response(xid=95): [] Sending request(xid=96): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/7_4_4_0', version=-1) http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563968.038509 HTTP/1.1" 200 161 Executing query DROP TABLE IF EXISTS prometheus SYNC on node [gw2] PASSED test_prometheus_protocols/test.py::test_create_as_table Received response(xid=96): True Sending request(xid=97): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/1_0_4_1', watcher=None) Received response(xid=97): [] Sending request(xid=98): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/1_0_4_1', version=-1) Received response(xid=98): True Sending request(xid=99): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/7_2_2_0', watcher=None) Received response(xid=99): [] Sending request(xid=100): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/7_2_2_0', version=-1) Received response(xid=100): True Sending request(xid=101): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/9_0_4_1', watcher=None) Received response(xid=101): [] Sending request(xid=102): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/9_0_4_1', version=-1) Received response(xid=102): True Sending request(xid=103): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/2_0_4_1', watcher=None) Received response(xid=103): [] Sending request(xid=104): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/2_0_4_1', version=-1) Executing query SELECT getSetting('max_memory_usage') on node Received response(xid=104): True Sending request(xid=105): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/2_0_0_0', watcher=None) Received response(xid=105): [] Sending request(xid=106): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/2_0_0_0', version=-1) Received response(xid=106): True Sending request(xid=107): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/8_2_2_0', watcher=None) Received response(xid=107): [] Sending request(xid=108): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/8_2_2_0', version=-1) http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None Received response(xid=108): True Sending request(xid=109): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/8_3_3_0', watcher=None) Received response(xid=109): [] Sending request(xid=110): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/8_3_3_0', version=-1) Received response(xid=110): True Sending request(xid=111): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/5_3_3_0', watcher=None) Received response(xid=111): [] Sending request(xid=112): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/5_3_3_0', version=-1) Received response(xid=112): True Sending request(xid=113): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/4_1_1_0', watcher=None) Received response(xid=113): [] Sending request(xid=114): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/4_1_1_0', version=-1) Received response(xid=114): True Sending request(xid=115): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/2_2_2_0', watcher=None) Received response(xid=115): [] Sending request(xid=116): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/2_2_2_0', version=-1) http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json 
HTTP/1.1" 200 None Received response(xid=116): True Sending request(xid=117): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/0_1_1_0', watcher=None) Received response(xid=117): [] Sending request(xid=118): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/0_1_1_0', version=-1) http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Received response(xid=118): True Sending request(xid=119): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/9_3_3_0', watcher=None) Received response(xid=119): [] Sending request(xid=120): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/9_3_3_0', version=-1) Received response(xid=120): True Sending request(xid=121): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/5_4_4_0', watcher=None) Received response(xid=121): [] Sending request(xid=122): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/5_4_4_0', version=-1) Received response(xid=122): True Sending request(xid=123): GetChildren(path='/clickhouse/tables/test/replicas/replica2/parts/8_1_1_0', watcher=None) Received response(xid=123): [] Sending request(xid=124): Delete(path='/clickhouse/tables/test/replicas/replica2/parts/8_1_1_0', version=-1) Received response(xid=124): True Sending request(xid=125): Delete(path='/clickhouse/tables/test/replicas/replica2/parts', version=-1) Received response(xid=125): True Sending request(xid=126): GetChildren(path='/clickhouse/tables/test/replicas/replica2/metadata', watcher=None) Received response(xid=126): [] Sending request(xid=127): Delete(path='/clickhouse/tables/test/replicas/replica2/metadata', version=-1) Received response(xid=127): True Sending request(xid=128): GetChildren(path='/clickhouse/tables/test/replicas/replica2/log_pointer', watcher=None) Received response(xid=128): [] Sending request(xid=129): Delete(path='/clickhouse/tables/test/replicas/replica2/log_pointer', version=-1) Received response(xid=129): True Sending request(xid=130): GetChildren(path='/clickhouse/tables/test/replicas/replica2/host', watcher=None) Received response(xid=130): [] Sending request(xid=131): Delete(path='/clickhouse/tables/test/replicas/replica2/host', version=-1) Received response(xid=131): True Sending request(xid=132): GetChildren(path='/clickhouse/tables/test/replicas/replica2/is_lost', watcher=None) Received response(xid=132): [] Sending request(xid=133): Delete(path='/clickhouse/tables/test/replicas/replica2/is_lost', version=-1) Received response(xid=133): True Sending request(xid=134): GetChildren(path='/clickhouse/tables/test/replicas/replica2/metadata_version', watcher=None) Received response(xid=134): [] Sending request(xid=135): Delete(path='/clickhouse/tables/test/replicas/replica2/metadata_version', version=-1) Received response(xid=135): True Sending request(xid=136): GetChildren(path='/clickhouse/tables/test/replicas/replica2/columns', watcher=None) Received response(xid=136): [] Sending request(xid=137): Delete(path='/clickhouse/tables/test/replicas/replica2/columns', version=-1) Received response(xid=137): True Sending request(xid=138): GetChildren(path='/clickhouse/tables/test/replicas/replica2/mutation_pointer', watcher=None) Received response(xid=138): [] Sending request(xid=139): Delete(path='/clickhouse/tables/test/replicas/replica2/mutation_pointer', version=-1) Received response(xid=139): True Sending request(xid=140): GetChildren(path='/clickhouse/tables/test/replicas/replica2/queue', 
watcher=None) Received response(xid=140): [] Sending request(xid=141): Delete(path='/clickhouse/tables/test/replicas/replica2/queue', version=-1) Received response(xid=141): True Sending request(xid=142): GetChildren(path='/clickhouse/tables/test/replicas/replica2/flags', watcher=None) Received response(xid=142): [] Sending request(xid=143): Delete(path='/clickhouse/tables/test/replicas/replica2/flags', version=-1) Received response(xid=143): True Sending request(xid=144): GetChildren(path='/clickhouse/tables/test/replicas/replica2/min_unprocessed_insert_time', watcher=None) Received response(xid=144): [] Sending request(xid=145): Delete(path='/clickhouse/tables/test/replicas/replica2/min_unprocessed_insert_time', version=-1) Received response(xid=145): True Sending request(xid=146): GetChildren(path='/clickhouse/tables/test/replicas/replica2/creator_info', watcher=None) Received response(xid=146): [] Sending request(xid=147): Delete(path='/clickhouse/tables/test/replicas/replica2/creator_info', version=-1) Received response(xid=147): True http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None Sending request(xid=148): GetChildren(path='/clickhouse/tables/test/replicas/replica2/max_processed_insert_time', watcher=None) Received response(xid=148): [] Sending request(xid=149): Delete(path='/clickhouse/tables/test/replicas/replica2/max_processed_insert_time', version=-1) Received response(xid=149): True Sending request(xid=150): Delete(path='/clickhouse/tables/test/replicas/replica2', version=-1) Executing query GRANT R4 TO R2 on instance Received response(xid=150): True Sending request(xid=151): Exists(path='/clickhouse/tables/test/replicas/replica2', watcher=None) Executing query SYSTEM RESTORE REPLICA test on replica1 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Executing query SELECT id FROM test_table order by id on node8 Executing query DROP TABLE IF EXISTS original SYNC on node Executing query SELECT getSetting('load_balancing') on node http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query GRANT R1,R2,R3 TO A on instance http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Executing query SELECT getSetting('alter_sync') on node http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query DROP TABLE IF EXISTS mydata SYNC on node http://localhost:None "GET 
/v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None
[gw9] PASSED test_replication_credentials/test.py::test_credentials_and_no_credentials
Running tests in /ClickHouse/tests/integration/test_replication_credentials/test.py
Instance directory already exists. Did you call cluster.start() for second time?
Cluster start called. is_up=True
test_replication_credentials/test.py::test_different_credentials
Docker networks for project roottestreplicationcredentials-gw9 are
NETWORK ID NAME DRIVER SCOPE
ae46a17ea4d7 roottestreplicationcredentials-gw9_default bridge local
Docker containers for project roottestreplicationcredentials-gw9 are
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76971a0644be altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node2-1
4649a5d38d15 altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node6-1
b226d563ef23 altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node8-1
8d61da1a90d7 altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node4-1
5c3acbcffc88 altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node5-1
5b1874cc9120 altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node1-1
4d686d5cedfd altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node3-1
f8d046e842db altinityinfra/integration-test:8b2301119731 "clickhouse server -…" 8 seconds ago Up 7 seconds roottestreplicationcredentials-gw9-node7-1
fe3a8ea2cded altinityinfra/integration-test:8b2301119731 "clickhouse keeper -…" 13 seconds ago Up 12 seconds roottestreplicationcredentials-gw9-zoo3-1
90bd81b20d53 altinityinfra/integration-test:8b2301119731 "clickhouse keeper -…" 13 seconds ago Up 12 seconds roottestreplicationcredentials-gw9-zoo1-1
5aed177e74b6 altinityinfra/integration-test:8b2301119731 "clickhouse keeper -…" 13 seconds ago Up 12 seconds roottestreplicationcredentials-gw9-zoo2-1
http://localhost:None "GET /v1.46/containers/c33c72aa877bbb83fdfec81b6dd4883b5fdcf500b8d83afb11c0835a41dd1e76/json HTTP/1.1" 200 None
ClickHouse node started
get_instance_ip instance_name=new_node
http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=new_node
http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in new_node, ip: 172.16.9.9...
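The `session_id=session+%233` requests in the records that follow exercise SET ROLE over ClickHouse's HTTP interface; the role change only sticks because every request reuses the same session_id. A minimal reproduction with requests (instance address taken from the log; the session name is arbitrary):

import requests

BASE = "http://172.16.3.2:8123/"
SESSION = {"session_id": "session #3"}  # url-encodes to session+%233, as in the log

def q(query: str) -> str:
    # Same shape as the GETs below: the query plus a sticky session_id parameter.
    r = requests.get(BASE, params={**SESSION, "query": query})
    r.raise_for_status()
    return r.text

q("SET ROLE R1")
print(q("SELECT defaultRoles(), currentRoles(), enabledRoles()"))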
http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/6429e8cedd6d9d67bd40950617485be651ae5cac237ad176d6c4d9fcd9b18f78/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Docker volumes for project roottestreplicationcredentials-gw9 are DRIVER VOLUME NAME Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test3/replicated', 'node5') PARTITION BY toYYYYMM(date) ORDER BY id; on node5 ClickHouse new_node started get_instance_ip instance_name=switching_node http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=switching_node http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in switching_node, ip: 172.16.9.10... http://localhost:None "GET /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/893784e922cf9d0cc3149fbd7ad8b9f99cacc632aed3131968817a0baf1b274b/json HTTP/1.1" 200 None Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Executing query SET ROLE R1 on instance via HTTP interface Sending request(xid=152): GetChildren(path='/clickhouse/tables/test/replicas/replica1', watcher=None) Starting new HTTP connection (1): 172.16.3.2:8123 Received response(xid=152): ['parts', 'max_processed_insert_time', 'is_active', 'metadata', 'host', 'log_pointer', 'metadata_version', 'is_lost', 'columns', 'flags', 'queue', 'min_unprocessed_insert_time', 'mutation_pointer', 'creator_info'] Sending request(xid=153): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts', watcher=None) Received response(xid=153): ['0_0_4_1', '6_3_3_0', '6_2_2_0', '0_4_4_0', '6_0_0_0', '1_1_1_0', '5_0_0_0', '8_0_4_1', '4_0_0_0', '3_1_1_0', '0_0_0_0', '2_3_3_0', '4_2_2_0', '6_0_4_1', '9_2_2_0', '1_2_2_0', '3_0_4_1', '5_1_1_0', '1_4_4_0', '7_3_3_0', '1_3_3_0', '4_3_3_0', '9_0_0_0', '8_4_4_0', '9_1_1_0', '7_1_1_0', '4_0_4_1', '3_2_2_0', '4_4_4_0', '5_0_4_1', '3_4_4_0', '7_0_0_0', '3_3_3_0', '9_4_4_0', '6_1_1_0', '8_0_0_0', '5_2_2_0', '3_0_0_0', '1_0_0_0', '2_1_1_0', '6_4_4_0', '0_2_2_0', '0_3_3_0', '2_4_4_0', '7_0_4_1', '7_4_4_0', '1_0_4_1', '7_2_2_0', '9_0_4_1', '2_0_4_1', '2_0_0_0', '8_2_2_0', '8_3_3_0', '5_3_3_0', '4_1_1_0', '2_2_2_0', '0_1_1_0', '9_3_3_0', '5_4_4_0', '8_1_1_0'] Sending request(xid=154): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/0_0_4_1', watcher=None) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SET+ROLE+R1 HTTP/1.1" 200 None Received response(xid=154): [] Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Sending request(xid=155): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/0_0_4_1', version=-1) Starting new HTTP connection (1): 172.16.3.2:8123 Received response(xid=155): True Sending request(xid=156): 
GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/6_3_3_0', watcher=None) Received response(xid=156): [] http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Sending request(xid=157): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/6_3_3_0', version=-1) Received response(xid=157): True Sending request(xid=158): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/6_2_2_0', watcher=None) Executing query DROP TABLE IF EXISTS mytable SYNC on node Received response(xid=158): [] Sending request(xid=159): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/6_2_2_0', version=-1) Executing query SET ROLE R2 on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 Received response(xid=159): True Sending request(xid=160): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/0_4_4_0', watcher=None) Received response(xid=160): [] http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SET+ROLE+R2 HTTP/1.1" 200 None Sending request(xid=161): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/0_4_4_0', version=-1) Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 Received response(xid=161): True Sending request(xid=162): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/6_0_0_0', watcher=None) Received response(xid=162): [] Sending request(xid=163): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/6_0_0_0', version=-1) Received response(xid=163): True Sending request(xid=164): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/1_1_1_0', watcher=None) Received response(xid=164): [] Sending request(xid=165): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/1_1_1_0', version=-1) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Received response(xid=165): True Sending request(xid=166): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/5_0_0_0', watcher=None) Received response(xid=166): [] Sending request(xid=167): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/5_0_0_0', version=-1) Received response(xid=167): True Sending request(xid=168): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/8_0_4_1', watcher=None) Executing query SET ROLE NONE on instance via HTTP interface Received response(xid=168): [] Starting new HTTP connection (1): 172.16.3.2:8123 Sending request(xid=169): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/8_0_4_1', version=-1) Received response(xid=169): True Sending request(xid=170): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/4_0_0_0', watcher=None) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SET+ROLE+NONE HTTP/1.1" 200 None Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Received response(xid=170): [] Starting new HTTP connection (1): 172.16.3.2:8123 Sending request(xid=171): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/4_0_0_0', version=-1) Received response(xid=171): True Sending request(xid=172): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/3_1_1_0', watcher=None) Received response(xid=172): [] http://localhost:None "GET 
/v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Sending request(xid=173): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/3_1_1_0', version=-1) Received response(xid=173): True [gw4] PASSED test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml] test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload Sending request(xid=174): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/0_0_0_0', watcher=None) Received response(xid=174): [] Sending request(xid=175): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/0_0_0_0', version=-1) http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Received response(xid=175): True Sending request(xid=176): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/2_3_3_0', watcher=None) Received response(xid=176): [] Sending request(xid=177): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/2_3_3_0', version=-1) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Received response(xid=177): True Sending request(xid=178): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/4_2_2_0', watcher=None) Received response(xid=178): [] Sending request(xid=179): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/4_2_2_0', version=-1) Executing query SET ROLE DEFAULT on instance via HTTP interface Received response(xid=179): True Starting new HTTP connection (1): 172.16.3.2:8123 http://localhost:None "GET /v1.46/containers/893784e922cf9d0cc3149fbd7ad8b9f99cacc632aed3131968817a0baf1b274b/json HTTP/1.1" 200 None Sending request(xid=180): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/6_0_4_1', watcher=None) Received response(xid=180): [] Sending request(xid=181): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/6_0_4_1', version=-1) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SET+ROLE+DEFAULT HTTP/1.1" 200 None Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via 
HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 Received response(xid=181): True Sending request(xid=182): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/9_2_2_0', watcher=None) Received response(xid=182): [] Sending request(xid=183): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/9_2_2_0', version=-1) Received response(xid=183): True Sending request(xid=184): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/1_2_2_0', watcher=None) Received response(xid=184): [] Sending request(xid=185): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/1_2_2_0', version=-1) Received response(xid=185): True Sending request(xid=186): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/3_0_4_1', watcher=None) Received response(xid=186): [] Sending request(xid=187): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/3_0_4_1', version=-1) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Received response(xid=187): True Sending request(xid=188): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/5_1_1_0', watcher=None) Received response(xid=188): [] Sending request(xid=189): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/5_1_1_0', version=-1) Received response(xid=189): True Sending request(xid=190): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/1_4_4_0', watcher=None) Received response(xid=190): [] Sending request(xid=191): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/1_4_4_0', version=-1) Received response(xid=191): True Executing query SET DEFAULT ROLE R2 TO A on instance Sending request(xid=192): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/7_3_3_0', watcher=None) Received response(xid=192): [] Sending request(xid=193): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/7_3_3_0', version=-1) Received response(xid=193): True Sending request(xid=194): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/1_3_3_0', watcher=None) Received response(xid=194): [] Sending request(xid=195): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/1_3_3_0', version=-1) Received response(xid=195): True Sending request(xid=196): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/4_3_3_0', watcher=None) Received response(xid=196): [] Sending request(xid=197): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/4_3_3_0', version=-1) Received response(xid=197): True Sending request(xid=198): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/9_0_0_0', watcher=None) Received response(xid=198): [] Sending request(xid=199): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/9_0_0_0', version=-1) Executing query SYSTEM RELOAD CONFIG on node Received response(xid=199): True Sending request(xid=200): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/8_4_4_0', watcher=None) Received response(xid=200): [] Sending request(xid=201): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/8_4_4_0', version=-1) Received response(xid=201): True Sending request(xid=202): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/9_1_1_0', watcher=None) Received response(xid=202): [] Sending request(xid=203): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/9_1_1_0', version=-1) Received response(xid=203): True Sending request(xid=204): 
GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/7_1_1_0', watcher=None) Received response(xid=204): [] Sending request(xid=205): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/7_1_1_0', version=-1) Received response(xid=205): True Sending request(xid=206): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/4_0_4_1', watcher=None) Received response(xid=206): [] Sending request(xid=207): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/4_0_4_1', version=-1) Received response(xid=207): True Sending request(xid=208): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/3_2_2_0', watcher=None) Received response(xid=208): [] Sending request(xid=209): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/3_2_2_0', version=-1) Received response(xid=209): True Sending request(xid=210): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/4_4_4_0', watcher=None) Received response(xid=210): [] Sending request(xid=211): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/4_4_4_0', version=-1) Received response(xid=211): True Sending request(xid=212): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/5_0_4_1', watcher=None) Received response(xid=212): [] Sending request(xid=213): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/5_0_4_1', version=-1) Received response(xid=213): True Sending request(xid=214): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/3_4_4_0', watcher=None) Received response(xid=214): [] Sending request(xid=215): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/3_4_4_0', version=-1) http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Received response(xid=215): True Sending request(xid=216): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/7_0_0_0', watcher=None) Received response(xid=216): [] Sending request(xid=217): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/7_0_0_0', version=-1) Received response(xid=217): True Sending request(xid=218): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/3_3_3_0', watcher=None) Received response(xid=218): [] http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None Sending request(xid=219): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/3_3_3_0', version=-1) Received response(xid=219): True Sending request(xid=220): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/9_4_4_0', watcher=None) Received response(xid=220): [] Sending request(xid=221): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/9_4_4_0', version=-1) Received response(xid=221): True Sending request(xid=222): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/6_1_1_0', watcher=None) Received response(xid=222): [] Sending request(xid=223): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/6_1_1_0', version=-1) Received response(xid=223): True Sending request(xid=224): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/8_0_0_0', watcher=None) Received response(xid=224): [] Sending request(xid=225): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/8_0_0_0', version=-1) http://localhost:None "GET /v1.46/containers/893784e922cf9d0cc3149fbd7ad8b9f99cacc632aed3131968817a0baf1b274b/json HTTP/1.1" 200 None ClickHouse switching_node 
started Cluster started Received response(xid=225): True Sending request(xid=226): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/5_2_2_0', watcher=None) Executing query CREATE TABLE test_log_table ( id Int64, val String ) ENGINE=Log SETTINGS storage_policy='s3' on switching_node Received response(xid=226): [] Sending request(xid=227): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/5_2_2_0', version=-1) Received response(xid=227): True Sending request(xid=228): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/3_0_0_0', watcher=None) Received response(xid=228): [] Sending request(xid=229): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/3_0_0_0', version=-1) Received response(xid=229): True Sending request(xid=230): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/1_0_0_0', watcher=None) Received response(xid=230): [] Sending request(xid=231): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/1_0_0_0', version=-1) Received response(xid=231): True Sending request(xid=232): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/2_1_1_0', watcher=None) Received response(xid=232): [] Sending request(xid=233): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/2_1_1_0', version=-1) Received response(xid=233): True Sending request(xid=234): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/6_4_4_0', watcher=None) Received response(xid=234): [] Sending request(xid=235): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/6_4_4_0', version=-1) Received response(xid=235): True Sending request(xid=236): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/0_2_2_0', watcher=None) Received response(xid=236): [] Sending request(xid=237): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/0_2_2_0', version=-1) Received response(xid=237): True Sending request(xid=238): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/0_3_3_0', watcher=None) Received response(xid=238): [] Sending request(xid=239): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/0_3_3_0', version=-1) Received response(xid=239): True Sending request(xid=240): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/2_4_4_0', watcher=None) Received response(xid=240): [] Sending request(xid=241): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/2_4_4_0', version=-1) Received response(xid=241): True Sending request(xid=242): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/7_0_4_1', watcher=None) Received response(xid=242): [] Sending request(xid=243): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/7_0_4_1', version=-1) Received response(xid=243): True Sending request(xid=244): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/7_4_4_0', watcher=None) Received response(xid=244): [] Sending request(xid=245): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/7_4_4_0', version=-1) Received response(xid=245): True Sending request(xid=246): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/1_0_4_1', watcher=None) Received response(xid=246): [] Sending request(xid=247): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/1_0_4_1', version=-1) Received response(xid=247): True Sending request(xid=248): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/7_2_2_0', watcher=None) Received response(xid=248): [] Sending request(xid=249): 
Delete(path='/clickhouse/tables/test/replicas/replica1/parts/7_2_2_0', version=-1) Received response(xid=249): True Sending request(xid=250): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/9_0_4_1', watcher=None) Received response(xid=250): [] Sending request(xid=251): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/9_0_4_1', version=-1) Received response(xid=251): True Sending request(xid=252): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/2_0_4_1', watcher=None) Received response(xid=252): [] Sending request(xid=253): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/2_0_4_1', version=-1) Received response(xid=253): True Sending request(xid=254): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/2_0_0_0', watcher=None) Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test3/replicated', 'node6') PARTITION BY toYYYYMM(date) ORDER BY id; on node6 Received response(xid=254): [] Sending request(xid=255): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/2_0_0_0', version=-1) Received response(xid=255): True Sending request(xid=256): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/8_2_2_0', watcher=None) Received response(xid=256): [] Sending request(xid=257): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/8_2_2_0', version=-1) Executing query DROP TABLE IF EXISTS mymetrics SYNC on node Received response(xid=257): True Sending request(xid=258): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/8_3_3_0', watcher=None) Received response(xid=258): [] Sending request(xid=259): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/8_3_3_0', version=-1) Received response(xid=259): True Sending request(xid=260): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/5_3_3_0', watcher=None) Received response(xid=260): [] Sending request(xid=261): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/5_3_3_0', version=-1) Received response(xid=261): True Sending request(xid=262): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/4_1_1_0', watcher=None) Received response(xid=262): [] Sending request(xid=263): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/4_1_1_0', version=-1) Received response(xid=263): True Sending request(xid=264): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/2_2_2_0', watcher=None) Received response(xid=264): [] Sending request(xid=265): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/2_2_2_0', version=-1) Received response(xid=265): True Sending request(xid=266): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/0_1_1_0', watcher=None) Received response(xid=266): [] Sending request(xid=267): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/0_1_1_0', version=-1) http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Received response(xid=267): True Sending request(xid=268): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/9_3_3_0', watcher=None) Received response(xid=268): [] Sending request(xid=269): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/9_3_3_0', version=-1) Received response(xid=269): True Sending request(xid=270): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/5_4_4_0', watcher=None) Received 
response(xid=270): [] Sending request(xid=271): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/5_4_4_0', version=-1) http://localhost:None "GET /v1.46/containers/c97dc055a0a4232d43d530298f38a150824d749c2d2603d09d001e7589cae17f/json HTTP/1.1" 200 None ClickHouse s0_0_0 started get_instance_ip instance_name=s0_0_1 Received response(xid=271): True Sending request(xid=272): GetChildren(path='/clickhouse/tables/test/replicas/replica1/parts/8_1_1_0', watcher=None) Received response(xid=272): [] Sending request(xid=273): Delete(path='/clickhouse/tables/test/replicas/replica1/parts/8_1_1_0', version=-1) Received response(xid=273): True Sending request(xid=274): Delete(path='/clickhouse/tables/test/replicas/replica1/parts', version=-1) Received response(xid=274): True Sending request(xid=275): GetChildren(path='/clickhouse/tables/test/replicas/replica1/max_processed_insert_time', watcher=None) Received response(xid=275): [] Sending request(xid=276): Delete(path='/clickhouse/tables/test/replicas/replica1/max_processed_insert_time', version=-1) http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_0_1-1/json HTTP/1.1" 200 None Received response(xid=276): True get_instance_ip instance_name=s0_0_1 Sending request(xid=277): GetChildren(path='/clickhouse/tables/test/replicas/replica1/is_active', watcher=None) Received response(xid=277): [] Sending request(xid=278): Delete(path='/clickhouse/tables/test/replicas/replica1/is_active', version=-1) http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_0_1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_0_1, ip: 172.16.6.9... http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_0_1-1/json HTTP/1.1" 200 None Received response(xid=278): True Sending request(xid=279): GetChildren(path='/clickhouse/tables/test/replicas/replica1/metadata', watcher=None) Received response(xid=279): [] Sending request(xid=280): Delete(path='/clickhouse/tables/test/replicas/replica1/metadata', version=-1) http://localhost:None "GET /v1.46/containers/421a88fdd7260366cf325372c722ccb41663c4ebb08e1c7a84d52ed8d66eda59/json HTTP/1.1" 200 None ClickHouse s0_0_1 started get_instance_ip instance_name=s0_1_0 Received response(xid=280): True Sending request(xid=281): GetChildren(path='/clickhouse/tables/test/replicas/replica1/host', watcher=None) Received response(xid=281): [] Sending request(xid=282): Delete(path='/clickhouse/tables/test/replicas/replica1/host', version=-1) http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_1_0-1/json HTTP/1.1" 200 None Received response(xid=282): True get_instance_ip instance_name=s0_1_0 Sending request(xid=283): GetChildren(path='/clickhouse/tables/test/replicas/replica1/log_pointer', watcher=None) Received response(xid=283): [] Sending request(xid=284): Delete(path='/clickhouse/tables/test/replicas/replica1/log_pointer', version=-1) http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_1_0-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in s0_1_0, ip: 172.16.6.10... 
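The flood of `PUT /root/data/...` requests below is the s3Cluster test seeding MinIO with CSV inputs before querying them: "root" is the bucket and the rest of the path is the object key. A sketch with the minio client, with endpoint and bucket taken from the log; the credentials and the CSV payload are assumptions, not shown in this log:

import io
from minio import Minio

# 172.16.6.8:9001 is the MinIO endpoint the PUT records below go to.
client = Minio("172.16.6.8:9001", access_key="minio", secret_key="minio123", secure=False)
for i in range(32):  # the run uploads data/generated/file_0.csv, file_1.csv, ...
    body = f"{i},{i * i}\n".encode()  # illustrative CSV row
    client.put_object("root", f"data/generated/file_{i}.csv", io.BytesIO(body), len(body))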
http://localhost:None "GET /v1.46/containers/roottests3cluster-gw5-s0_1_0-1/json HTTP/1.1" 200 None Received response(xid=284): True Sending request(xid=285): GetChildren(path='/clickhouse/tables/test/replicas/replica1/metadata_version', watcher=None) Received response(xid=285): [] http://localhost:None "GET /v1.46/containers/192cb9848c73985501f9b1f08fbf3e669ef8d1f4b01f0bd2f7cec09ddd2cf4e5/json HTTP/1.1" 200 None ClickHouse s0_1_0 started Cluster started Sending request(xid=286): Delete(path='/clickhouse/tables/test/replicas/replica1/metadata_version', version=-1) Received response(xid=286): True Sending request(xid=287): GetChildren(path='/clickhouse/tables/test/replicas/replica1/is_lost', watcher=None) Received response(xid=287): [] Sending request(xid=288): Delete(path='/clickhouse/tables/test/replicas/replica1/is_lost', version=-1) Received response(xid=288): True Sending request(xid=289): GetChildren(path='/clickhouse/tables/test/replicas/replica1/columns', watcher=None) Received response(xid=289): [] Sending request(xid=290): Delete(path='/clickhouse/tables/test/replicas/replica1/columns', version=-1) Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 Received response(xid=290): True Sending request(xid=291): GetChildren(path='/clickhouse/tables/test/replicas/replica1/flags', watcher=None) Received response(xid=291): [] Sending request(xid=292): Delete(path='/clickhouse/tables/test/replicas/replica1/flags', version=-1) Received response(xid=292): True Sending request(xid=293): GetChildren(path='/clickhouse/tables/test/replicas/replica1/queue', watcher=None) Received response(xid=293): [] Sending request(xid=294): Delete(path='/clickhouse/tables/test/replicas/replica1/queue', version=-1) Received response(xid=294): True Sending request(xid=295): GetChildren(path='/clickhouse/tables/test/replicas/replica1/min_unprocessed_insert_time', watcher=None) Received response(xid=295): [] Sending request(xid=296): Delete(path='/clickhouse/tables/test/replicas/replica1/min_unprocessed_insert_time', version=-1) http://172.16.6.8:9001 "PUT /root/data/clickhouse/part1.csv HTTP/1.1" 200 0 Received response(xid=296): True Sending request(xid=297): GetChildren(path='/clickhouse/tables/test/replicas/replica1/mutation_pointer', watcher=None) http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Received response(xid=297): [] Sending request(xid=298): Delete(path='/clickhouse/tables/test/replicas/replica1/mutation_pointer', version=-1) http://172.16.6.8:9001 "PUT /root/data/clickhouse/part123.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/database/part2.csv HTTP/1.1" 200 0 Received response(xid=298): True Sending request(xid=299): GetChildren(path='/clickhouse/tables/test/replicas/replica1/creator_info', watcher=None) Executing query REVOKE R3 FROM A on instance Received response(xid=299): [] Sending request(xid=300): Delete(path='/clickhouse/tables/test/replicas/replica1/creator_info', version=-1) http://172.16.6.8:9001 "PUT /root/data/database/partition675.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_0.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_1.csv HTTP/1.1" 200 0 Received response(xid=300): True Sending request(xid=301): Delete(path='/clickhouse/tables/test/replicas/replica1', version=-1) http://172.16.6.8:9001 "PUT 
/root/data/generated/file_2.csv HTTP/1.1" 200 0 Received response(xid=301): True Sending request(xid=302): Exists(path='/clickhouse/tables/test/replicas/replica1', watcher=None) Executing query SYSTEM RESTART REPLICA test on replica1 http://172.16.6.8:9001 "PUT /root/data/generated/file_3.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_4.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_5.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_6.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_7.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_8.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_9.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_10.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_11.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_12.csv HTTP/1.1" 200 0 run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDx4eXo+ODwveHl6PgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDx4eXo+ODwveHl6PgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml] http://172.16.6.8:9001 "PUT /root/data/generated/file_13.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_14.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_15.csv HTTP/1.1" 200 0 Executing query INSERT INTO test_log_table VALUES (0, 'a') on switching_node http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://172.16.6.8:9001 "PUT /root/data/generated/file_16.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_17.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_18.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_19.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_20.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_21.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_22.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_23.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_24.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_25.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_26.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_27.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_28.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_29.csv HTTP/1.1" 200 0 Executing query SYSTEM RELOAD CONFIG on node http://172.16.6.8:9001 "PUT /root/data/generated/file_30.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_31.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT 
/root/data/generated/file_32.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_33.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_34.csv HTTP/1.1" 200 0 test_prometheus_protocols/test.py::test_custom_id_algorithm Executing query CREATE TABLE prometheus (id FixedString(16) DEFAULT murmurHash3_128(metric_name, all_tags)) ENGINE=TimeSeries on node http://172.16.6.8:9001 "PUT /root/data/generated/file_35.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_36.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_37.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_38.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_39.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_40.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_41.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_42.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_43.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_44.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_45.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_46.csv HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://172.16.6.8:9001 "PUT /root/data/generated/file_47.csv HTTP/1.1" 200 0 Executing query insert into test_table values ('2017-06-20', 111, 0) on node5 http://172.16.6.8:9001 "PUT /root/data/generated/file_48.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_49.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_50.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_51.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_52.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_53.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_54.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_55.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_56.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_57.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_58.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_59.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_60.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_61.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_62.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_63.csv HTTP/1.1" 200 0 Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.6.8:9001 "PUT /root/data/generated/file_64.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_65.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_66.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_67.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_68.csv HTTP/1.1" 200 0 http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Executing query REVOKE R2 
FROM A on instance http://172.16.6.8:9001 "PUT /root/data/generated/file_69.csv HTTP/1.1" 200 0 Executing query SYSTEM RESTORE REPLICA test on replica1 http://172.16.6.8:9001 "PUT /root/data/generated/file_70.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_71.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_72.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_73.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_74.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_75.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_76.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_77.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_78.csv HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None http://172.16.6.8:9001 "PUT /root/data/generated/file_79.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_80.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_81.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_82.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_83.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_84.csv HTTP/1.1" 200 0 Executing query SELECT count() FROM test_log_table on switching_node http://172.16.6.8:9001 "PUT /root/data/generated/file_85.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_86.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_87.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_88.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_89.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_90.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_91.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_92.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_93.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_94.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_95.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_96.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_97.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_98.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "PUT /root/data/generated/file_99.csv HTTP/1.1" 200 0 http://172.16.6.8:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix= HTTP/1.1" 200 0 Starting mock server s3_mock.py run container_id:roottests3cluster-gw5-resolver-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname s3_mock.py) && echo 
aW1wb3J0IHN5cwoKZnJvbSBib3R0bGUgaW1wb3J0IHJlcXVlc3QsIHJlc3BvbnNlLCByb3V0ZSwgcnVuCgoKQHJvdXRlKCIvPF9idWNrZXQ+LzxfcGF0aDpwYXRoPiIpCmRlZiBzZXJ2ZXIoX2J1Y2tldCwgX3BhdGgpOgogICAgcmVzdWx0ID0gKAogICAgICAgIHJlcXVlc3QuaGVhZGVyc1siTXlDdXN0b21IZWFkZXIiXQogICAgICAgIGlmICJNeUN1c3RvbUhlYWRlciIgaW4gcmVxdWVzdC5oZWFkZXJzCiAgICAgICAgZWxzZSAidW5rbm93biIKICAgICkKICAgIHJlc3BvbnNlLmNvbnRlbnRfdHlwZSA9ICJ0ZXh0L3BsYWluIgogICAgcmVzcG9uc2Uuc2V0X2hlYWRlcigiQ29udGVudC1MZW5ndGgiLCBsZW4ocmVzdWx0KSkKICAgIHJldHVybiByZXN1bHQKCgpAcm91dGUoIi8iKQpkZWYgcGluZygpOgogICAgcmVzcG9uc2UuY29udGVudF90eXBlID0gInRleHQvcGxhaW4iCiAgICByZXNwb25zZS5zZXRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIDIpCiAgICByZXR1cm4gIk9LIgoKCnJ1bihob3N0PSIwLjAuMC4wIiwgcG9ydD1pbnQoc3lzLmFyZ3ZbMV0pKQo= | base64 --decode > s3_mock.py'] Command:[docker exec roottests3cluster-gw5-resolver-1 bash -c mkdir -p $(dirname s3_mock.py) && echo aW1wb3J0IHN5cwoKZnJvbSBib3R0bGUgaW1wb3J0IHJlcXVlc3QsIHJlc3BvbnNlLCByb3V0ZSwgcnVuCgoKQHJvdXRlKCIvPF9idWNrZXQ+LzxfcGF0aDpwYXRoPiIpCmRlZiBzZXJ2ZXIoX2J1Y2tldCwgX3BhdGgpOgogICAgcmVzdWx0ID0gKAogICAgICAgIHJlcXVlc3QuaGVhZGVyc1siTXlDdXN0b21IZWFkZXIiXQogICAgICAgIGlmICJNeUN1c3RvbUhlYWRlciIgaW4gcmVxdWVzdC5oZWFkZXJzCiAgICAgICAgZWxzZSAidW5rbm93biIKICAgICkKICAgIHJlc3BvbnNlLmNvbnRlbnRfdHlwZSA9ICJ0ZXh0L3BsYWluIgogICAgcmVzcG9uc2Uuc2V0X2hlYWRlcigiQ29udGVudC1MZW5ndGgiLCBsZW4ocmVzdWx0KSkKICAgIHJldHVybiByZXN1bHQKCgpAcm91dGUoIi8iKQpkZWYgcGluZygpOgogICAgcmVzcG9uc2UuY29udGVudF90eXBlID0gInRleHQvcGxhaW4iCiAgICByZXNwb25zZS5zZXRfaGVhZGVyKCJDb250ZW50LUxlbmd0aCIsIDIpCiAgICByZXR1cm4gIk9LIgoKCnJ1bihob3N0PSIwLjAuMC4wIiwgcG9ydD1pbnQoc3lzLmFyZ3ZbMV0pKQo= | base64 --decode > s3_mock.py] Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563972.84838 HTTP/1.1" 200 162 http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SELECT getSetting('max_memory_usage') on node run container_id:roottests3cluster-gw5-resolver-1 detach:True nothrow:False cmd: ['bash', '-c', 'python3 s3_mock.py 8080 >/var/log/resolver/s3_mock.log 2>/var/log/resolver/s3_mock.err.log'] Command:[docker exec roottests3cluster-gw5-resolver-1 bash -c python3 s3_mock.py 8080 >/var/log/resolver/s3_mock.log 2>/var/log/resolver/s3_mock.err.log] run container_id:roottests3cluster-gw5-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8080/'] Command:[docker exec roottests3cluster-gw5-resolver-1 curl -s http://localhost:8080/] Executing query SELECT defaultRoles(), currentRoles(), enabledRoles() on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 http://172.16.3.2:8123 "GET /?session_id=session+%233&query=SELECT+defaultRoles%28%29%2C+currentRoles%28%29%2C+enabledRoles%28%29 HTTP/1.1" 200 None Executing query SET DEFAULT ROLE ALL TO A on instance http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Executing query SYSTEM RESTART REPLICA test on replica2 Exitcode:7 run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat /etc/clickhouse-server/config.d/switching_node.xml'] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c cat /etc/clickhouse-server/config.d/switching_node.xml] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None Stdout: 
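[For reference, the base64 payload piped into s3_mock.py above decodes to the following Bottle app. It answers any /<bucket>/<path> request with the value of the MyCustomHeader request header (or "unknown" when the header is absent) and answers "/" with OK — which is what the curl readiness probe ("answered OK on attempt 2") and the later headers(MyCustomHeader = 'SomeValue') queries exercise:

import sys

from bottle import request, response, route, run


@route("/<_bucket>/<_path:path>")
def server(_bucket, _path):
    result = (
        request.headers["MyCustomHeader"]
        if "MyCustomHeader" in request.headers
        else "unknown"
    )
    response.content_type = "text/plain"
    response.set_header("Content-Length", len(result))
    return result


@route("/")
def ping():
    response.content_type = "text/plain"
    response.set_header("Content-Length", 2)
    return "OK"


run(host="0.0.0.0", port=int(sys.argv[1]))
]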
Stdout: Stdout: Stdout: 0 Stdout: run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/0101 /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDxtYXhfbWVtb3J5X3VzYWdlPjEwMDAwMDAwMDAwPC9tYXhfbWVtb3J5X3VzYWdlPgogICAgICAgICAgICA8bG9hZF9iYWxhbmNpbmc+Zmlyc3Rfb3JfcmFuZG9tPC9sb2FkX2JhbGFuY2luZz4KICAgICAgICAgICAgPHJlcGxpY2F0aW9uX2FsdGVyX3BhcnRpdGlvbnNfc3luYz4yPC9yZXBsaWNhdGlvbl9hbHRlcl9wYXJ0aXRpb25zX3N5bmM+CiAgICAgICAgPC9kZWZhdWx0PgogICAgPC9wcm9maWxlcz4KPC9jbGlja2hvdXNlPgo= | base64 --decode > /etc/clickhouse-server/users.d/z.xml] http://localhost:None "GET /v1.46/containers/d950a44bd15f5aa436aa5115cf0dfd149e7230e576f191ffbcf0643eda939b37/json HTTP/1.1" 200 None ClickHouse node started Executing query CREATE TABLE distributed (id UInt32) ENGINE = Distributed('test_cluster', 'default', 'replicated') on node Executing query SYSTEM RELOAD CONFIG on node Executing query CREATE ROLE R1 on instance Executing query SELECT sum(n), count() FROM test on replica2 Executing query CREATE TABLE distributed2 (id UInt32) ENGINE = Distributed('test_cluster2', 'default', 'replicated') on node Starting new HTTP connection (1): 172.16.10.3:9091 Executing query SELECT id FROM test_table order by id on node5 Executing query CREATE ROLE R2 on instance http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563972.84838 HTTP/1.1" 200 87 Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563972.84838 HTTP/1.1" 200 162 Executing query SELECT sum(n), count() FROM test on replica3 run container_id:roottests3cluster-gw5-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8080/'] Command:[docker exec roottests3cluster-gw5-resolver-1 curl -s http://localhost:8080/] Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Executing query SELECT * FROM test_table on instance Stdout:OK s3_mock.py answered OK on attempt 2 Mock server s3_mock.py started Executing query SELECT l.name, r.value from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') as l JOIN s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') as r ON l.name = r.name on s0_0_0 Executing query SELECT id FROM test_table order by id on node6 Executing query INSERT INTO test SELECT number + 1000 FROM numbers(1000) on replica1 run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query GRANT R1 TO A on instance Stdout:8 Executing query insert into test_table values ('2017-06-21', 222, 1) on node6 Executing query SELECT * FROM test_table on instance Executing query 
GRANT R2 TO R1 on instance Executing query SYSTEM SYNC REPLICA test on replica2 [gw5] PASSED test_s3_cluster/test.py::test_ambiguous_join test_s3_cluster/test.py::test_cluster_default_expression Executing query insert into function s3('http://minio1:9001/root/data/data1', 'minio', 'minio123', JSONEachRow) select 1 as id settings s3_truncate_on_insert=1 on s0_0_0 run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDx4eXo+ODwveHl6PgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/users.d/z.xml) && echo PGNsaWNraG91c2U+CiAgICA8cHJvZmlsZXMgcmVwbGFjZT0icmVwbGFjZSI+CiAgICAgICAgPGRlZmF1bHQ+CiAgICAgICAgICAgIDx4eXo+ODwveHl6PgogICAgICAgIDwvZGVmYXVsdD4KICAgIDwvcHJvZmlsZXM+CjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/users.d/z.xml] Executing query SELECT * FROM test_table on instance Executing query SYSTEM SYNC REPLICA test on replica3 run container_id:roottestreloadingsettingsfromusersxml-gw4-node-1 detach:False nothrow:False cmd: ['bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "Setting xyz is neither a builtin setting nor started with the prefix \'custom_\' registered for user-defined settings" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] Command:[docker exec roottestreloadingsettingsfromusersxml-gw4-node-1 bash -c [ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "Setting xyz is neither a builtin setting nor started with the prefix 'custom_' registered for user-defined settings" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true] Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:19:32.796922 [ 9 ] {3d49c6eb-ed24-4781-8cdc-58ac24a08c09} executeQuery: Code: 347. DB::Exception: Code: 115. DB::Exception: Setting xyz is neither a builtin setting nor started with the prefix 'custom_' registered for user-defined settings: while parsing profile 'default' in users configuration file: while loading configuration file '/etc/clickhouse-server/users.xml'. (UNKNOWN_SETTING), Stack trace (when copying this message, always include the lines below): Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:19:32.797616 [ 9 ] {} TCPHandler: Code: 347. DB::Exception: Code: 115. DB::Exception: Setting xyz is neither a builtin setting nor started with the prefix 'custom_' registered for user-defined settings: while parsing profile 'default' in users configuration file: while loading configuration file '/etc/clickhouse-server/users.xml'. 
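[The base64 payloads written to /etc/clickhouse-server/users.d/z.xml in this span decode to two small profile overrides. One sets real settings, which the SELECT getSetting('max_memory_usage') / getSetting('load_balancing') / getSetting('alter_sync') probes just below verify after SYSTEM RELOAD CONFIG (alter_sync is the current name of replication_alter_partitions_sync):

<clickhouse>
    <profiles replace="replace">
        <default>
            <max_memory_usage>10000000000</max_memory_usage>
            <load_balancing>first_or_random</load_balancing>
            <replication_alter_partitions_sync>2</replication_alter_partitions_sync>
        </default>
    </profiles>
</clickhouse>

The other deliberately sets a nonexistent setting, which is what produces the UNKNOWN_SETTING exception quoted in the surrounding zgrep output:

<clickhouse>
    <profiles replace="replace">
        <default>
            <xyz>8</xyz>
        </default>
    </profiles>
</clickhouse>
]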
(UNKNOWN_SETTING), Stack trace (when copying this message, always include the lines below): Executing query SELECT getSetting('max_memory_usage') on node Executing query insert into function s3('http://minio1:9001/root/data/data2', 'minio', 'minio123', JSONEachRow) select * from numbers(0) settings s3_truncate_on_insert=1 on s0_0_0 Starting new HTTP connection (1): 172.16.10.3:9091 Executing query GRANT SELECT ON test_table TO R2 on instance Executing query SELECT sum(n), count() FROM test on replica1 http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563972.84838 HTTP/1.1" 200 162 Executing query DROP TABLE IF EXISTS prometheus SYNC on node [gw2] PASSED test_prometheus_protocols/test.py::test_custom_id_algorithm Executing query SELECT getSetting('load_balancing') on node Executing query SELECT * FROM test_table on instance Executing query insert into function s3('http://minio1:9001/root/data/data3', 'minio', 'minio123', JSONEachRow) select 2 as id settings s3_truncate_on_insert=1 on s0_0_0 Executing query DROP TABLE IF EXISTS original SYNC on node Executing query SELECT sum(n), count() FROM test on replica2 run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT getSetting('alter_sync') on node Stdout:8 Executing query DROP USER IF EXISTS A, B on instance [gw1] PASSED test_role/test.py::test_grant_role_to_role Executing query SELECT * FROM s3('http://minio1:9001/root/data/data{1,2,3}', 'minio', 'minio123', 'JSONEachRow', 'id UInt32, date Date DEFAULT 18262') order by id on s0_0_0 Executing query DROP TABLE IF EXISTS mydata SYNC on node Executing query SELECT sum(n), count() FROM test on replica3 Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance Command:[docker compose --env-file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/.env --project-name roottestreloadingsettingsfromusersxml-gw4 --file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/docker-compose.yml stop --timeout 20] [gw4] PASSED test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout Executing query SELECT id FROM test_table order by id on node5 Executing query SELECT * FROM s3Cluster(cluster_simple, 'http://minio1:9001/root/data/data{1,2,3}', 'minio', 'minio123', 'JSONEachRow', 'id UInt32, date Date DEFAULT 18262') order by id on s0_0_0 Executing query DROP TABLE IF EXISTS mytable SYNC on node test_role/test.py::test_introspection Executing query CREATE USER A on instance Executing query SELECT id FROM test_table order by id on node6 Executing query DROP TABLE IF EXISTS mymetrics SYNC on node Executing query SYSTEM RESTORE REPLICA test on replica1 Executing query SELECT * FROM s3Cluster(cluster_simple, 'http://minio1:9001/root/data/data{1,2,3}', 'minio', 'minio123', 'auto', 'id UInt32, date Date DEFAULT 18262') order by id on s0_0_0 Executing query CREATE USER B on instance run container_id:roottestreplicationcredentials-gw9-node5-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n \n 9009\n \n admin\n 222\n \n root\n 111\n \n \n aaa\n 333\n \n \n \n ' > 
/etc/clickhouse-server/config.d/credentials1.xml"] Command:[docker exec roottestreplicationcredentials-gw9-node5-1 bash -c echo ' 9009 admin 222 root 111 aaa 333 ' > /etc/clickhouse-server/config.d/credentials1.xml] Executing query CREATE TABLE prometheus ENGINE=TimeSeries on node test_prometheus_protocols/test.py::test_default Executing query SYSTEM RESTORE REPLICA test on replica2 Executing query SYSTEM RELOAD CONFIG on node5 Executing query CREATE ROLE R1 on instance Executing query SELECT * FROM s3Cluster(cluster_simple, 'http://minio1:9001/root/data/data{1,2,3}', 'minio', 'minio123', 'JSONEachRow', 'id UInt32, date Date DEFAULT 18262', 'auto') order by id on s0_0_0 Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563976.2728918 HTTP/1.1" 200 162 Executing query SYSTEM RESTORE REPLICA test on replica3 Executing query INSERT INTO test_table values('2017-06-21', 333, 1) on node5 Executing query CREATE ROLE R2 on instance run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query SELECT * FROM s3Cluster(cluster_simple, 'http://minio1:9001/root/data/data{1,2,3}', 'minio', 'minio123', 'auto', 'id UInt32, date Date DEFAULT 18262', 'auto') order by id on s0_0_0 [gw8] PASSED test_restore_replica/test.py::test_restore_replica_alive_replicas test_restore_replica/test.py::test_restore_replica_invalid_tables Executing query SYSTEM RESTORE REPLICA i_dont_exist_42 on replica1 Executing query GRANT R1 TO A on instance Executing query SYSTEM SYNC REPLICA test_table on node6 Executing query GRANT R2 TO B WITH ADMIN OPTION on instance Executing query SYSTEM RESTORE REPLICA no_db.i_dont_exist_42 on replica1 Executing query SELECT id FROM test_table order by id on node6 Executing query SELECT * FROM s3Cluster(cluster_simple, test_s3_with_default) order by id on s0_0_0 Executing query GRANT SELECT ON test.table TO A, R2 on instance Executing query SYSTEM RESTORE REPLICA system.numbers on replica1 [gw9] PASSED test_replication_credentials/test.py::test_different_credentials test_replication_credentials/test.py::test_no_credentials Running tests in /ClickHouse/tests/integration/test_replication_credentials/test.py Instance directory already exists. Did you call cluster.start() for second time? Cluster start called. 
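[The echo into /etc/clickhouse-server/config.d/credentials1.xml above has had its XML tags stripped by whatever captured this log; only the values survive (port 9009 and the user/password pairs admin/222, root/111, aaa/333). Judging by those values and the test module (test_replication_credentials), the file presumably configured interserver HTTP credentials roughly along these lines — the tag layout below is an assumption, not recovered from the log:

<clickhouse>
    <interserver_http_port>9009</interserver_http_port>
    <interserver_http_credentials>
        <user>admin</user>
        <password>222</password>
        <!-- assumed: the remaining pairs (root/111, aaa/333) listed as
             previously valid credentials, as ClickHouse allows for rotation -->
        <old>
            <user>root</user>
            <password>111</password>
        </old>
        <old>
            <user>aaa</user>
            <password>333</password>
        </old>
    </interserver_http_credentials>
</clickhouse>
]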
is_up=True Docker networks for project roottestreplicationcredentials-gw9 are
NETWORK ID     NAME                                         DRIVER   SCOPE
ae46a17ea4d7   roottestreplicationcredentials-gw9_default   bridge   local
Docker containers for project roottestreplicationcredentials-gw9 are
CONTAINER ID   IMAGE                                         COMMAND                  CREATED          STATUS          PORTS   NAMES
76971a0644be   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node2-1
4649a5d38d15   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node6-1
b226d563ef23   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node8-1
8d61da1a90d7   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node4-1
5c3acbcffc88   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node5-1
5b1874cc9120   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node1-1
4d686d5cedfd   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node3-1
f8d046e842db   altinityinfra/integration-test:8b2301119731   "clickhouse server -…"   12 seconds ago   Up 12 seconds           roottestreplicationcredentials-gw9-node7-1
fe3a8ea2cded   altinityinfra/integration-test:8b2301119731   "clickhouse keeper -…"   17 seconds ago   Up 17 seconds           roottestreplicationcredentials-gw9-zoo3-1
90bd81b20d53   altinityinfra/integration-test:8b2301119731   "clickhouse keeper -…"   17 seconds ago   Up 17 seconds           roottestreplicationcredentials-gw9-zoo1-1
5aed177e74b6   altinityinfra/integration-test:8b2301119731   "clickhouse keeper -…"   17 seconds ago   Up 17 seconds           roottestreplicationcredentials-gw9-zoo2-1
Docker volumes for project roottestreplicationcredentials-gw9 are
DRIVER   VOLUME NAME
Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test2/replicated', 'node3') PARTITION BY toYYYYMM(date) ORDER BY id; on node3 [gw5] PASSED test_s3_cluster/test.py::test_cluster_default_expression test_s3_cluster/test.py::test_cluster_format_detection Executing query desc s3('http://minio1:9001/root/data/generated/*', 'minio', 'minio123', 'CSV') on s0_0_0 Executing query GRANT CREATE ON *.* TO B WITH GRANT OPTION on instance [gw8] PASSED test_restore_replica/test.py::test_restore_replica_invalid_tables test_restore_replica/test.py::test_restore_replica_parallel get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestrestorereplica-gw8-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Executing query DROP TABLE IF EXISTS test SYNC on replica1 Executing query REVOKE SELECT(x) ON test.table FROM R2 on instance Starting new HTTP connection (1): 172.16.10.3:9091 Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE =
ReplicatedMergeTree('/clickhouse/tables/test2/replicated', 'node4') PARTITION BY toYYYYMM(date) ORDER BY id; on node4 Executing query desc s3('http://minio1:9001/root/data/generated/*', 'minio', 'minio123') on s0_0_0 http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563976.2728918 HTTP/1.1" 200 87 Starting new HTTP connection (1): 172.16.10.2:9090 http://172.16.10.2:9090 "GET /api/v1/query?query=up&time=1743563976.2728918 HTTP/1.1" 200 162 Executing query DROP TABLE IF EXISTS test SYNC on replica2 run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SHOW ROLES on instance run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1/exec HTTP/1.1" 201 74 Executing query SELECT * FROM s3('http://minio1:9001/root/data/generated/*', 'minio', 'minio123', 'CSV', 'a String, b UInt64') order by a, b on s0_0_0 http://localhost:None "POST /v1.46/exec/ef4dfb9f79fa28024cd5e95a69338b72de8c1cc882af8f98a4a4cbbd1bc10603/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/ef4dfb9f79fa28024cd5e95a69338b72de8c1cc882af8f98a4a4cbbd1bc10603/json HTTP/1.1" 200 586 Executing query insert into test_table values ('2017-06-18', 111, 0) on node3 Executing query DROP TABLE IF EXISTS test SYNC on replica3 Executing query SHOW CREATE ROLE R1 on instance Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica1') ORDER BY n PARTITION BY n % 10; on replica1 Executing query SHOW CREATE ROLE R2 on instance Executing query SELECT * FROM s3Cluster(cluster_simple, 'http://minio1:9001/root/data/generated/*', 'minio', 'minio123') order by c1, c2 on s0_0_0 Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica2') ORDER BY n PARTITION BY n % 10; on replica2 Executing query SHOW CREATE ROLES R1, R2 on instance Executing query SHOW CREATE ROLES on instance Executing query CREATE TABLE test(n UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/', 'replica3') ORDER BY n PARTITION BY n % 10; on replica3 Starting new HTTP connection (1): 172.16.10.3:9091 Executing query SELECT * FROM s3Cluster(cluster_simple, 'http://minio1:9001/root/data/generated/*', 'minio', 'minio123', auto, 'a String, b UInt64') order by a, b on s0_0_0 http://172.16.10.3:9091 "GET /api/v1/query?query=up&time=1743563976.2728918 HTTP/1.1" 200 162 Executing query DROP TABLE IF EXISTS prometheus SYNC on node [gw2] PASSED test_prometheus_protocols/test.py::test_default run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 
'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SHOW GRANTS FOR A on instance Stdout:800 Clickhouse process running. run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT sum(n), count() FROM test on replica1 Stdout:800 Executing query select 20 on switching_node Executing query DROP TABLE IF EXISTS original SYNC on node Executing query SELECT id FROM test_table order by id on node3 Executing query SHOW GRANTS FOR B on instance Executing query SELECT sum(n), count() FROM test on replica2 Executing query DROP TABLE IF EXISTS mydata SYNC on node [gw5] PASSED test_s3_cluster/test.py::test_cluster_format_detection Executing query SELECT * from s3('http://resolver:8080/bucket/key.csv', headers(MyCustomHeader = 'SomeValue')) on s0_0_0 Executing query SELECT id FROM test_table order by id on node4 test_s3_cluster/test.py::test_cluster_with_header Executing query SHOW GRANTS FOR R1 on instance Executing query SELECT sum(n), count() FROM test on replica3 Executing query DROP TABLE IF EXISTS mytable SYNC on node Executing query SELECT DISTINCT(name) FROM system.tables WHERE engine='View' and name='COLUMNS' on node1 Executing query insert into test_table values ('2017-06-19', 222, 1) on node4 Executing query SELECT * from s3('http://resolver:8080/bucket/key.csv', headers(MyCustomHeader = 'SomeValue'), 'CSV') on s0_0_0 Executing query SHOW GRANTS FOR R2 on instance Executing query INSERT INTO test SELECT number + 0 FROM numbers(200) on replica1 Stderr: zoo2 Skipped - Image is already being pulled by zoo1 Stderr: zoo3 Skipped - Image is already being pulled by zoo1 Stderr: node2 Skipped - Image is already being pulled by zoo1 Stderr: node1 Skipped - Image is already being pulled by zoo1 Stderr: zoo1 Pulling Stderr: zoo1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper1/log', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper1/config', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper1/coordination', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper2/log', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper2/config', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper2/coordination', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper3/log', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper3/config', '/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/keeper3/coordination'] Command:[docker compose --project-name roottestrecompressionttl-gw6 --env-file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Executing query DROP 
TABLE IF EXISTS mymetrics SYNC on node Executing query select 20 on switching_node Executing query SHOW GRANTS on instance Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Stopping Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/.env --project-name roottestreloadingsettingsfromusersxml-gw4 --file /ClickHouse/tests/integration/test_reloading_settings_from_users_xml/_instances-0-gw4/node/docker-compose.yml down --volumes] Executing query DROP TABLE IF EXISTS mydata on node test_prometheus_protocols/test.py::test_external_tables Executing query INSERT INTO test_log_table VALUES (0, 'a') on switching_node Executing query SHOW GRANTS FOR R1 on instance Executing query INSERT INTO test SELECT number + 200 FROM numbers(200) on replica1 Executing query SELECT * from s3Cluster('cluster_simple', 'http://resolver:8080/bucket/key.csv', headers(MyCustomHeader = 'SomeValue')) on s0_0_0 Executing query DROP TABLE IF EXISTS mytags on node Executing query SHOW GRANTS FOR R2 on instance Executing query SELECT count() FROM test_log_table on switching_node Executing query SELECT id FROM test_table order by id on node3 Executing query DROP TABLE IF EXISTS mymetrics on node Executing query SHOW GRANTS on instance Executing query SELECT id FROM test_table order by id on node4 Executing query SELECT * from s3Cluster('cluster_simple', 'http://resolver:8080/bucket/key.csv', headers(MyCustomHeader = 'SomeValue'), 'CSV') on s0_0_0 Executing query DROP TABLE IF EXISTS prometheus on node Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Stopping Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Stopped Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Removing Stderr: Container roottestreloadingsettingsfromusersxml-gw4-node-1 Removed Stderr: Network roottestreloadingsettingsfromusersxml-gw4_default Removing Stderr: Network roottestreloadingsettingsfromusersxml-gw4_default Removed Cleanup called Executing query INSERT INTO test SELECT number + 400 FROM numbers(200) on replica1 Docker networks for project roottestreloadingsettingsfromusersxml-gw4 are NETWORK ID NAME DRIVER SCOPE run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat /etc/clickhouse-server/config.d/switching_node.xml'] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 bash -c cat /etc/clickhouse-server/config.d/switching_node.xml] Docker containers for project roottestreloadingsettingsfromusersxml-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreloadingsettingsfromusersxml-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreloadingsettingsfromusersxml-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Stdout: Stdout: Stdout: Stdout: 1 Stdout: run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/1010 Stdout: Stdout: Stdout: 0 Stdout: run 
container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/0101 /var/lib/clickhouse/disks/s3/store/02f/02fed35a-6a76-4984-a49e-a48ecf03f3c3/detached/all_1_1_0/primary.cidx"] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 bash -c echo '5 1 50 50 old-style-prefix/with-several-section/nhd/vvvhcvoglfrboamrzuoazowhqiini 0 1 ' > /var/lib/clickhouse/disks/s3/store/02f/02fed35a-6a76-4984-a49e-a48ecf03f3c3/detached/all_1_1_0/primary.cidx] Received response(xid=728): True Sending request(xid=729): GetChildren(path='/clickhouse/tables/test/blocks/5_4397825682830605283_16079438150578917708', watcher=None) Received response(xid=729): [] Sending request(xid=730): Delete(path='/clickhouse/tables/test/blocks/5_4397825682830605283_16079438150578917708', version=-1) Received response(xid=730): True Sending request(xid=731): GetChildren(path='/clickhouse/tables/test/blocks/6_2855772191332027362_138014351105121520', watcher=None) Received response(xid=731): [] Sending request(xid=732): Delete(path='/clickhouse/tables/test/blocks/6_2855772191332027362_138014351105121520', version=-1) Received response(xid=732): True Sending request(xid=733): GetChildren(path='/clickhouse/tables/test/blocks/3_16275396704443078712_13742690842691968439', watcher=None) Received response(xid=733): [] Sending request(xid=734): Delete(path='/clickhouse/tables/test/blocks/3_16275396704443078712_13742690842691968439', version=-1) Received response(xid=734): True Sending request(xid=735): GetChildren(path='/clickhouse/tables/test/blocks/3_2034507573975896007_4327225584940923981', watcher=None) Received response(xid=735): [] Sending request(xid=736): Delete(path='/clickhouse/tables/test/blocks/3_2034507573975896007_4327225584940923981', version=-1) Received response(xid=736): True Sending request(xid=737): Delete(path='/clickhouse/tables/test/blocks', version=-1) Received response(xid=737): True Sending request(xid=738): Delete(path='/clickhouse/tables/test', version=-1) Received response(xid=738): True Sending request(xid=739): Exists(path='/clickhouse/tables/test', watcher=None) Executing query SYSTEM RESTART REPLICA test on replica1 Executing query SELECT count() FROM system.parts WHERE table = 'test_read_new_format' and active on node http://localhost:None "GET /v1.46/containers/798fe6bf1923a68ca2308fbb6b084251bbf96f2917b1fc3f98c6cb736f801c24/json HTTP/1.1" 200 None Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node http://localhost:None "GET /v1.46/containers/798fe6bf1923a68ca2308fbb6b084251bbf96f2917b1fc3f98c6cb736f801c24/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/798fe6bf1923a68ca2308fbb6b084251bbf96f2917b1fc3f98c6cb736f801c24/json HTTP/1.1" 200 None Executing query DETACH TABLE postgres_database.test_table on node1 http://localhost:None "GET /v1.46/containers/798fe6bf1923a68ca2308fbb6b084251bbf96f2917b1fc3f98c6cb736f801c24/json HTTP/1.1" 200 None ClickHouse node2 started Executing query create database re engine = Replicated('/test/re', 'shard1', '{replica}'); on node1 Executing query ALTER TABLE test_read_new_format ATTACH PART 'all_1_1_0' on node Executing query INSERT INTO test SELECT number AS num FROM numbers(1000,2000) WHERE num % 2 = 0 on replica1 Executing query SELECT name FROM system.parts where name = 'all_1_1_3' and table = 'table_for_recompression' on node2 Executing query SHOW TABLES FROM postgres_database on node1 Executing 
query SYSTEM RESTORE REPLICA test on replica1 Executing query create database re engine = Replicated('/test/re', 'shard1', '{replica}'); on node2 Executing query SELECT count() FROM system.parts WHERE table = 'test_read_new_format' and active on node Executing query ATTACH TABLE postgres_database.test_table on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 8 ? 00:00:01 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SHOW TABLES FROM postgres_database on node1 Executing query SELECT * FROM test_read_new_format on node Sending request(xid=740): Exists(path='/clickhouse/tables/test', watcher=None) Received response(xid=740): ZnodeStat(czxid=3823, mzxid=3823, ctime=1743564002064, mtime=1743564002064, version=0, cversion=28, aversion=0, ephemeralOwner=0, dataLength=0, numChildren=16, pzxid=3866) Executing query SELECT sum(n), count() FROM test on replica1 Stdout:8 Executing query SELECT name FROM system.parts where name = 'all_1_1_3' and table = 'table_for_recompression' on node2 Executing query DROP DATABASE postgres_database on node1 Executing query SELECT name FROM system.parts WHERE table = 'test_read_new_format' and active LIMIT 1 on node Executing query SELECT sum(n), count() FROM test on replica2 Executing query GRANT SELECT ON table2 TO rre on instance Executing query SHOW DATABASES on node1 Executing query SELECT path FROM system.parts WHERE table = 'test_read_new_format' and name = 'all_2_2_0' on node Executing query SELECT sum(n), count() FROM test on replica3 Executing query SELECT * FROM table1 on instance Executing query SELECT remote_path FROM system.remote_data_paths WHERE concat(path, local_path) = '/var/lib/clickhouse/disks/s3/store/02f/02fed35a-6a76-4984-a49e-a48ecf03f3c3/all_2_2_0/primary.cidx' on node [gw0] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl test_postgresql_database_engine/test.py::test_postgresql_database_with_schema Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL('postgres1:5432', 'postgres_database', 'postgres', 'mysecretpassword', 'test_schema') on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT name FROM system.parts where name = 'all_1_1_3' and table = 'table_for_recompression' on node2 Executing query INSERT INTO test SELECT number + 1000 FROM numbers(1000) on replica1 Stdout:8 Executing query SELECT * FROM table2 on instance Executing query SHOW TABLES FROM postgres_database on node1 Executing query 
CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3', allow_remote_fs_zero_copy_replication='0' on node [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0] [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0] test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1] Executing query DROP ROLE rre on instance Executing query INSERT INTO postgres_database.table1 SELECT number from numbers(10000) on node1 Executing query SYSTEM RESTART REPLICA test on replica2 Executing query CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3', allow_remote_fs_zero_copy_replication='0' on new_node Executing query DROP USER ure on instance Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). Executing query SELECT count() FROM postgres_database.table1 on node1 GatewayClient.address is deprecated and will be removed in version 1.0. Use GatewayParameters instead. Executing query SYSTEM RESTORE REPLICA test on replica2 Command to send: A 733deeff777deb278f34b5dfefbe405c8ff276f6265a9a5be9a1ba7509665941 Executing query SELECT name FROM system.parts where name = 'all_1_1_3' and table = 'table_for_recompression' on node2 Answer received: !yv Command to send: j i rj org.apache.spark.SparkConf e Answer received: !yv Command to send: j i rj org.apache.spark.api.java.* e Answer received: !yv Command to send: j i rj org.apache.spark.api.python.* e Answer received: !yv Command to send: j i rj org.apache.spark.ml.python.* e Answer received: !yv Command to send: j i rj org.apache.spark.mllib.api.python.* e Answer received: !yv Command to send: j i rj org.apache.spark.resource.* e Answer received: !yv Command to send: j i rj org.apache.spark.sql.* e Answer received: !yv Command to send: j i rj org.apache.spark.sql.api.python.* e Answer received: !yv Command to send: j i rj org.apache.spark.sql.hive.* e Answer received: !yv Command to send: j i rj scala.Tuple2 e Answer received: !yv Command to send: r u SparkConf rj e Answer received: !ycorg.apache.spark.SparkConf Command to send: i org.apache.spark.SparkConf bTrue e Answer received: !yro0 Command to send: c o0 set sspark.app.name sspark_test e Answer received: !yro1 Command to send: c o0 set sspark.master slocal e Answer received: !yro2 Command to send: c o0 contains sspark.serializer.objectStreamReset e Answer received: !ybfalse Command to send: c o0 set sspark.serializer.objectStreamReset s100 e Answer received: !yro3 Command to send: c o0 contains sspark.rdd.compress e Answer received: !ybfalse Command to send: c o0 set sspark.rdd.compress sTrue e Answer received: !yro4 Command to send: c o0 contains sspark.master e Answer received: !ybtrue Command to send: c o0 contains sspark.app.name e Answer received: !ybtrue Command to send: c o0 contains sspark.master e Answer received: !ybtrue Command to send: c o0 get sspark.master e Answer received: !yslocal Command to send: c o0 contains sspark.app.name e Answer received: !ybtrue Command to send: c 
o0 get sspark.app.name e Answer received: !ysspark_test Command to send: c o0 contains sspark.home e Answer received: !ybfalse Command to send: c o0 getAll e Answer received: !yto5 Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i0 e Answer received: !yro6 Command to send: c o6 _1 e Answer received: !ysspark.master Command to send: c o6 _2 e Answer received: !yslocal Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i1 e Answer received: !yro7 Command to send: c o7 _1 e Answer received: !ysspark.app.name Command to send: c o7 _2 e Answer received: !ysspark_test Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i2 e Answer received: !yro8 Command to send: c o8 _1 e Answer received: !ysspark.rdd.compress Command to send: c o8 _2 e Answer received: !ysTrue Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i3 e Answer received: !yro9 Command to send: c o9 _1 e Answer received: !ysspark.serializer.objectStreamReset Command to send: c o9 _2 e Answer received: !ys100 Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i4 e Answer received: !yro10 Command to send: c o10 _1 e Answer received: !ysspark.submit.pyFiles Command to send: c o10 _2 e Answer received: !ys Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i5 e Answer received: !yro11 Command to send: c o11 _1 e Answer received: !ysspark.submit.deployMode Command to send: c o11 _2 e Answer received: !ysclient Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i6 e Answer received: !yro12 Command to send: c o12 _1 e Answer received: !ysspark.app.submitTime Command to send: c o12 _2 e Answer received: !ys1743564004110 Command to send: a e o5 e Answer received: !yi8 Command to send: a g o5 i7 e Answer received: !yro13 Command to send: c o13 _1 e Answer received: !ysspark.ui.showConsoleProgress Command to send: c o13 _2 e Answer received: !ystrue Command to send: a e o5 e Answer received: !yi8 Command to send: r u JavaSparkContext rj e Answer received: !ycorg.apache.spark.api.java.JavaSparkContext Command to send: i org.apache.spark.api.java.JavaSparkContext ro0 e Executing query DROP TABLE table1 on instance Executing query INSERT INTO test_replicated_merge_tree VALUES (0, 'a') on node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query DROP TABLE table2 on instance Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Stopping Stderr: Container roottestprometheusprotocols-gw2-node-1 Stopping Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Stopping Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Stopped Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Stopped Stderr: Container roottestprometheusprotocols-gw2-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/.env --project-name roottestprometheusprotocols-gw2 --file /ClickHouse/tests/integration/test_prometheus_protocols/_instances-0-gw2/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_prometheus.yml down --volumes] Executing query DETACH TABLE postgres_database.table1 on node1 Executing query INSERT INTO test_replicated_merge_tree VALUES (1, 'b') on new_node Executing query SYSTEM RESTART REPLICA test on replica3 Executing query DROP USER IF EXISTS A, B on instance [gw1] PASSED test_role/test.py::test_role_expiration[False] Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance Executing query ATTACH TABLE postgres_database.table1 on node1 Command to send: A 733deeff777deb278f34b5dfefbe405c8ff276f6265a9a5be9a1ba7509665941 Answer received: !yv Command to send: m d o1 e Answer received: !yv Command to send: m d o2 e Answer received: !yv Command to send: m d o3 e Answer received: !yv Command to send: m d o4 e Answer received: !yv Command to send: m d o6 e Answer received: !yv Command to send: m d o7 e Executing query SELECT name FROM system.parts where name = 'all_1_1_3' and table = 'table_for_recompression' on node2 Answer received: !yv Command to send: m d o8 e Answer received: !yv Command to send: m d o5 e Answer received: !yv Executing query SYSTEM RESTORE REPLICA test on replica3 Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on node Executing query CREATE ROLE rre on instance test_role/test.py::test_role_expiration[True] Executing query SELECT count() FROM postgres_database.table1 on node1 Executing query SELECT default_compression_codec FROM system.parts where name = 'all_1_1_3' and table = 'table_for_recompression' on node2 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on new_node Stdout:8 Executing query CREATE USER ure DEFAULT ROLE rre on instance Executing query SYSTEM SYNC REPLICA test on replica2 Executing query DROP DATABASE postgres_database on node1 Executing query SELECT count() FROM test_replicated_merge_tree on node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Stderr: Container roottestprometheusprotocols-gw2-node-1 Stopping Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Stopping Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Stopping Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Stopped Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Removing Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Stopped Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Removing Stderr: Container roottestprometheusprotocols-gw2-node-1 Stopped Stderr: Container roottestprometheusprotocols-gw2-node-1 Removing Stderr: Container roottestprometheusprotocols-gw2-node-1 Removed Stderr: Container roottestprometheusprotocols-gw2-prometheus_writer-1 Removed Stderr: Container roottestprometheusprotocols-gw2-prometheus_reader-1 Removed Stderr: Network 
roottestprometheusprotocols-gw2_default Removing Stderr: Network roottestprometheusprotocols-gw2_default Removed Cleanup called Executing query CREATE TABLE table1 (id Int) Engine=Log on instance Docker networks for project roottestprometheusprotocols-gw2 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestprometheusprotocols-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprometheusprotocols-gw2 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestprometheusprotocols-gw2-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SYSTEM SYNC REPLICA test on replica3 [gw0] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_with_schema Unstopped containers: {} test_postgresql_database_engine/test.py::test_postgresql_fetch_tables No running containers for project: roottestprometheusprotocols-gw2 Trying to prune unused networks... Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL('postgres1:5432', 'postgres_database', 'postgres', 'mysecretpassword') on node1 Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:6 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 6 Executing query SELECT count() FROM test_replicated_merge_tree on new_node test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop Running tests in /ClickHouse/tests/integration/test_rocksdb_read_only/test.py Cluster start called. is_up=False Docker networks for project roottestrocksdbreadonly-gw2 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrocksdbreadonly-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query CREATE TABLE table2 (id Int) Engine=Log on instance Docker volumes for project roottestrocksdbreadonly-gw2 are DRIVER VOLUME NAME Cleanup called Executing query SELECT sum(n), count() FROM test on replica1 Docker networks for project roottestrocksdbreadonly-gw2 are NETWORK ID NAME DRIVER SCOPE Executing query SHOW TABLES FROM postgres_database on node1 Docker containers for project roottestrocksdbreadonly-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrocksdbreadonly-gw2 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrocksdbreadonly-gw2-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrocksdbreadonly-gw2 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
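
[editor's note] The "Cleanup called ... Trying to prune unused networks/images/volumes" sequence above runs before every cluster start: list any leftover containers for the project, then prune. A minimal Python sketch of that pattern, assuming only the docker CLI on PATH; the helper name prune_docker_project is illustrative, not part of the test framework:

import subprocess

def sh(cmd):
    # Run a shell command and return its stdout, like the Command:[...] lines above.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def prune_docker_project(project):
    # Hypothetical helper mirroring the framework's cleanup: find leftover
    # containers for one pytest-xdist project, then prune images and volumes.
    leftover = sh(
        f"docker container list --all --filter name='^/{project}-.*-1$' "
        "--format '{{.ID}}:{{.Names}}'"
    )
    print("Unstopped containers:", leftover.strip() or "{}")
    print(sh("docker image prune -f").strip())   # "Total reclaimed space: 0B" when idle
    print(sh("docker volume prune -f").strip())  # "Volumes pruned: ..." in the log

prune_docker_project("roottestrocksdbreadonly-gw2")
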
Command:[docker volume ls | wc -l] Stdout:6 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 6 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_rocksdb_read_only/configs/rocksdb.xml'] to /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/database Setup logs dir /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/.env Answer received: !yro14 Command to send: c o14 sc e Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] Answer received: !yro15 Command to send: c o15 conf e No config file found Executing query INSERT INTO table1 VALUES (1) on instance Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on node Answer received: !yro16 Command to send: r u PythonAccumulatorV2 rj e Answer received: !ycorg.apache.spark.api.python.PythonAccumulatorV2 Command to send: i org.apache.spark.api.python.PythonAccumulatorV2 s127.0.0.1 i58529 s733deeff777deb278f34b5dfefbe405c8ff276f6265a9a5be9a1ba7509665941 e Answer received: !yro17 Command to send: c o14 sc e Answer received: !yro18 Command to send: c o18 register ro17 e Answer received: !yv Command to send: r u PythonUtils rj e http://localhost:None "GET /version HTTP/1.1" 200 826 Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils isEncryptionEnabled e Answer received: !ym Command to send: c z:org.apache.spark.api.python.PythonUtils isEncryptionEnabled ro14 e Command:[docker compose --env-file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/.env --project-name roottestrocksdbreadonly-gw2 --file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/docker-compose.yml pull] Answer received: !ybfalse Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout e Answer received: !ym Command to send: c z:org.apache.spark.api.python.PythonUtils getPythonAuthSocketTimeout ro14 e Answer received: !yL15 Command to send: r u PythonUtils rj e Answer received: !ycorg.apache.spark.api.python.PythonUtils Command to send: r m org.apache.spark.api.python.PythonUtils getSparkBufferSize e Answer received: !ym Command to send: c z:org.apache.spark.api.python.PythonUtils getSparkBufferSize ro14 e Answer received: !yi65536 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: 
!yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.SparkFiles rj e Answer received: !ycorg.apache.spark.SparkFiles Command to send: r m org.apache.spark.SparkFiles getRootDirectory e Answer received: !ym Command to send: c z:org.apache.spark.SparkFiles getRootDirectory e Answer received: !ys/tmp/spark-4d08e6a8-c3f5-4129-bab7-a3bb53495139/userFiles-0db6cfd3-49ed-41ee-ad41-9371ad8e7148 Command to send: c o16 get sspark.submit.pyFiles s e Answer received: !ys Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.util rj e Answer received: !yp Command to send: r u org.apache.spark.util.Utils rj e Answer received: !ycorg.apache.spark.util.Utils Command to send: r m org.apache.spark.util.Utils getLocalDir e Answer received: !ym Command to send: c o14 sc e Answer received: !yro19 Command to send: c o19 conf e Answer received: !yro20 Command to send: c z:org.apache.spark.util.Utils getLocalDir ro20 e Answer received: !ys/tmp/spark-4d08e6a8-c3f5-4129-bab7-a3bb53495139 Command to send: r u org rj e Answer received: !yp Command to send: r u org.apache rj e Answer received: !yp Command to send: r u org.apache.spark rj e Answer received: !yp Command to send: r u org.apache.spark.util rj e Answer received: !yp Command to send: r u org.apache.spark.util.Utils rj e Answer received: !ycorg.apache.spark.util.Utils Command to send: r m org.apache.spark.util.Utils createTempDir e Answer received: !ym Command to send: c z:org.apache.spark.util.Utils createTempDir s/tmp/spark-4d08e6a8-c3f5-4129-bab7-a3bb53495139 spyspark e Answer received: !yro21 Command to send: c o21 getAbsolutePath e Answer received: !ys/tmp/spark-4d08e6a8-c3f5-4129-bab7-a3bb53495139/pyspark-f7d4cef4-43c1-44e0-9154-9a8f9978c9fd Command to send: c o16 get sspark.python.profile sfalse e Answer received: !ysfalse Command to send: r u SparkSession rj e Executing query SELECT sum(n), count() FROM test on replica2 run container_id:roottestpostgresqldatabaseengine-gw0-node1-1 detach:False nothrow:False cmd: ['bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "PostgreSQL table table1 does not exist" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] Command:[docker exec roottestpostgresqldatabaseengine-gw0-node1-1 bash -c [ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "PostgreSQL table table1 does not exist" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true] Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession getDefaultSession e run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession getDefaultSession e Answer received: !yro22 Command to send: c o22 isDefined e Answer received: !ybfalse Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: c o14 sc e Answer received: !yro23 Command to send: i 
java.util.HashMap e Answer received: !yao24 Command to send: c o24 put sspark.app.name sspark_test e Answer received: !yn Command to send: c o24 put sspark.master slocal e Answer received: !yn Command to send: i org.apache.spark.sql.SparkSession ro23 ro24 e [gw0] PASSED test_postgresql_database_engine/test.py::test_postgresql_fetch_tables test_postgresql_database_engine/test.py::test_postgresql_password_leak Executing query DROP DATABASE IF EXISTS postgres_database on node1 Stdout:8 Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Answer received: !yro25 Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession setDefaultSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession setDefaultSession ro25 e Answer received: !yv Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession setActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession setActiveSession ro25 e Answer received: !yv Command to send: c o14 stop e Executing query INSERT INTO table2 VALUES (2) on instance Answer received: !yv Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on new_node Executing query SELECT sum(n), count() FROM test on replica3 Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL('postgres1:5432', 'postgres_database', 'postgres', 'mysecretpassword', 'test_schema') on node1 Executing query GRANT SELECT ON table1 TO rre on instance Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession clearDefaultSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession clearDefaultSession e Answer received: !yv Command to send: r u SparkSession rj e Answer received: !ycorg.apache.spark.sql.SparkSession Command to send: r m org.apache.spark.sql.SparkSession clearActiveSession e Answer received: !ym Command to send: c z:org.apache.spark.sql.SparkSession clearActiveSession e Answer received: !yv clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log Cluster name: project_name:roottests3accessheaders-gw9. Added instance name:node1 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env', '--project-name', 'roottests3accessheaders-gw9', '--file', '/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ Starting cluster... Running tests in /ClickHouse/tests/integration/test_s3_access_headers/test.py Cluster start called. 
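
[editor's note] The "Command to send / Answer received" lines interleaved above are Py4J wire traffic from a PySpark driver running inside the same test session: building a SparkConf with spark.app.name=spark_test and spark.master=local, dumping getAll, then constructing a SparkSession. A sketch of the driver-side code that would emit this traffic, assuming pyspark is installed; option values are read off the getAll dump above:

from pyspark.sql import SparkSession

# Each builder call below turns into Py4J commands like
# "c o0 set sspark.app.name sspark_test" in the log.
spark = (
    SparkSession.builder
    .appName("spark_test")
    .master("local")
    .getOrCreate()
)
print(spark.sparkContext.getConf().getAll())  # mirrors the "c o0 getAll" round-trips
spark.stop()                                  # mirrors "c o14 stop" near the end
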
is_up=False Docker networks for project roottests3accessheaders-gw9 are NETWORK ID NAME DRIVER SCOPE Executing query SYSTEM RESTORE REPLICA test on replica1 Docker containers for project roottests3accessheaders-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottests3accessheaders-gw9 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottests3accessheaders-gw9 are NETWORK ID NAME DRIVER SCOPE Executing query DROP DATABASE IF EXISTS postgres_database2 on node1 Docker containers for project roottests3accessheaders-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottests3accessheaders-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottests3accessheaders-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1] test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2] Executing query CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3_zero_copy', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3', allow_remote_fs_zero_copy_replication='1' on node Unstopped containers: {} No running containers for project: roottests3accessheaders-gw9 Trying to prune unused networks... Command to send: m d o24 e Answer received: !yv Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Executing query SELECT * FROM table1 on instance Stdout:6 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 6 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_s3_access_headers/configs/config.d/named_collections.xml', '/ClickHouse/tests/integration/test_s3_access_headers/configs/config.d/s3_headers.xml'] to /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/database Setup logs dir /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in 
/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Executing query SYSTEM RESTORE REPLICA test on replica2 http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --project-name roottests3accessheaders-gw9 --file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml pull] Executing query CREATE DATABASE postgres_database2 ENGINE = PostgreSQL('postgres1:5432', 'postgres_database', 'postgres', 'mysecretpassword') on node1 Executing query CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3_zero_copy', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3', allow_remote_fs_zero_copy_replication='1' on new_node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SELECT * FROM table2 on instance Executing query SYSTEM RESTORE REPLICA test on replica3 Executing query SHOW CREATE postgres_database.table1 on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query INSERT INTO test_replicated_merge_tree VALUES (0, 'a') on node No clickhouse process running. Start new one. 
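
[editor's note] The repeated "ps ax | grep 'clickhouse' ..." execs against roottestrefreshablemv-gw4-node1-1 are a poll loop: after killing the server, the runner waits until no PID is printed ("No clickhouse process running. Start new one." above), then launches a fresh one. A sketch of that loop, assuming the docker Python SDK; clickhouse_pids is an illustrative name:

import time
import docker

PS_CMD = ["bash", "-c",
          "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
          "| grep -v 'bash -c' | awk '{print $1}'"]

def clickhouse_pids(container):
    # PIDs of clickhouse processes inside the container, e.g. ['785'] in the log.
    return container.exec_run(PS_CMD).output.decode().split()

client = docker.from_env()
node = client.containers.get("roottestrefreshablemv-gw4-node1-1")
node.exec_run(["bash", "-c", "pkill clickhouse"], user="root")
while clickhouse_pids(node):      # poll until the old server process is gone
    time.sleep(1)
print("No clickhouse process running. Start new one.")
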
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 [gw8] PASSED test_restore_replica/test.py::test_restore_replica_sequential Command:[docker compose --env-file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env --project-name roottestrestorereplica-gw8 --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/docker-compose.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/docker-compose.yml stop --timeout 20] http://localhost:None "POST /v1.46/exec/7ee379e6761ed7455f95561b3bb369251e7ad5e45d804d91dbd388b631afe33d/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/7ee379e6761ed7455f95561b3bb369251e7ad5e45d804d91dbd388b631afe33d/json HTTP/1.1" 200 586 Executing query SHOW CREATE postgres_database2.table2 on node1 Executing query INSERT INTO test_replicated_merge_tree VALUES (1, 'b') on new_node Command to send: m d o9 e Answer received: !yv Command to send: m d o0 e Answer received: !yv Command to send: m d o10 e Answer received: !yv Command to send: m d o11 e Answer received: !yv Command to send: m d o12 e Answer received: !yv Command to send: m d o13 e Answer received: !yv Command to send: m d o15 e Answer received: !yv Command to send: m d o18 e Answer received: !yv Command to send: m d o19 e Answer received: !yv Command to send: m d o20 e Answer received: !yv Command to send: m d o21 e Answer received: !yv Command to send: m d o22 e Answer received: !yv Command to send: m d o23 e Answer received: !yv Executing query DROP DATABASE postgres_database on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on node Executing query DROP DATABASE postgres_database2 on node1 Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on new_node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] [gw0] PASSED test_postgresql_database_engine/test.py::test_postgresql_password_leak test_postgresql_database_engine/test.py::test_predefined_connection_configuration Executing query DROP DATABASE IF EXISTS postgres_database on node1 Stdout:785 Clickhouse process running. 
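
[editor's note] The test_remote_blobs_naming runs around this point all follow one shape: insert one row on each replica, SYSTEM SYNC REPLICA both, compare counts, then resolve the table UUID and list its blobs in system.remote_data_paths. A condensed sketch of that check, assuming the framework's node.query helper (the calls the log prints as "Executing query ... on node") returns plain strings:

def check_replicated_blobs(node, new_node, table="test_replicated_merge_tree"):
    # Write one row per replica, then make both replicas converge.
    node.query(f"INSERT INTO {table} VALUES (0, 'a')")
    new_node.query(f"INSERT INTO {table} VALUES (1, 'b')")
    for replica in (node, new_node):
        replica.query(f"SYSTEM SYNC REPLICA {table}")
        assert replica.query(f"SELECT count() FROM {table}").strip() == "2"
    # Blob paths are keyed by the table UUID, so resolve it first.
    uuid = node.query(f"SELECT uuid FROM system.tables WHERE name = '{table}'").strip()
    # Then list the remote blobs behind this table's local data paths.
    return node.query(
        "SELECT remote_path FROM system.remote_data_paths "
        f"WHERE local_path LIKE '%{uuid}%' "
        "AND local_path NOT LIKE '%format_version.txt%' ORDER BY ALL"
    )
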
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:785 Executing query select 20 on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SELECT count() FROM test_replicated_merge_tree on node Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL(postgres1) on node1 Executing query select create_table_query from system.tables where database ='postgres_database' on node1 Executing query SELECT count() FROM test_replicated_merge_tree on new_node Executing query select 20 on node1 Executing query INSERT INTO postgres_database.test_table SELECT number, number from numbers(100) on node1 Executing query SELECT uuid FROM system.tables WHERE name = 'test_replicated_merge_tree' on node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SELECT count() FROM postgres_database.test_table on node1 Executing query SELECT remote_path FROM system.remote_data_paths WHERE local_path LIKE '%4b9d30ca-2273-4db1-a542-c2e4970e1c9a%' AND local_path NOT LIKE '%format_version.txt%' ORDER BY ALL on node Executing query select 20 on node1 Executing query DROP DATABASE IF EXISTS postgres_database on node1 Executing query SELECT uuid FROM system.tables WHERE name = 'test_replicated_merge_tree' on new_node Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL(postgres1, schema='test_schema') on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query select 20 on node1 Executing query INSERT INTO postgres_database.test_table SELECT number from numbers(200) on node1 Executing query SELECT remote_path FROM system.remote_data_paths WHERE local_path LIKE '%7591e889-7356-4b3a-b9ac-8e1c0546641f%' AND local_path NOT LIKE '%format_version.txt%' ORDER BY ALL on new_node Stderr: Container roottests3cluster-gw5-s0_1_0-1 Stopping Stderr: Container roottests3cluster-gw5-resolver-1 Stopping Stderr: Container roottests3cluster-gw5-s0_0_1-1 Stopping Stderr: Container roottests3cluster-gw5-s0_0_0-1 Stopping Stderr: Container roottests3cluster-gw5-s0_1_0-1 Stopped Stderr: Container roottests3cluster-gw5-s0_0_0-1 Stopped Stderr: Container roottests3cluster-gw5-minio1-1 Stopping Stderr: Container roottests3cluster-gw5-s0_0_1-1 Stopped Stderr: Container roottests3cluster-gw5-zoo2-1 Stopping Stderr: Container roottests3cluster-gw5-zoo3-1 Stopping Stderr: Container roottests3cluster-gw5-zoo1-1 Stopping Stderr: Container roottests3cluster-gw5-minio1-1 Stopped Stderr: Container roottests3cluster-gw5-zoo3-1 Stopped Stderr: Container roottests3cluster-gw5-zoo2-1 Stopped Stderr: Container roottests3cluster-gw5-zoo1-1 Stopped Stderr: Container roottests3cluster-gw5-resolver-1 Stopped Stderr: Container roottests3cluster-gw5-proxy1-1 Stopping Stderr: Container roottests3cluster-gw5-proxy2-1 Stopping Stderr: Container roottests3cluster-gw5-proxy1-1 Stopped Stderr: Container roottests3cluster-gw5-proxy2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/logs/stderr.log ] && zgrep -aH "==================" 
/ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/.env --project-name roottests3cluster-gw5 --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_0/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_0_1/docker-compose.yml --file /ClickHouse/tests/integration/test_s3_cluster/_instances-0-gw5/s0_1_0/docker-compose.yml down --volumes] Executing query select '===test_refresh_vs_shutdown_smoke start===' on node1 Executing query SELECT count() FROM postgres_database.test_table on node1 Executing query SELECT name FROM system.parts WHERE table = 'test_replicated_merge_tree' AND active ORDER BY ALL on node Executing query create materialized view re.a0 refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select number*10 as x from numbers(2) on node1 Executing query DROP DATABASE IF EXISTS postgres_database on node1 Executing query SELECT value FROM system.zookeeper WHERE path='/clickhouse/tables/test_replicated_merge_tree_s3_zero_copy' and name='table_shared_id' on node Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL(postgres1, 'test_schema') on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/4b9d30ca-2273-4db1-a542-c2e4970e1c9a/0_0_0_0' ORDER BY ALL on node Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL(postgres2) on node1 Executing query create materialized view re.a1 refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select number*10 as x from numbers(2) on node1 Stderr: Container roottests3cluster-gw5-s0_0_1-1 Stopping Stderr: Container roottests3cluster-gw5-s0_1_0-1 Stopping Stderr: Container roottests3cluster-gw5-s0_0_0-1 Stopping Stderr: Container roottests3cluster-gw5-resolver-1 Stopping Stderr: Container roottests3cluster-gw5-s0_0_1-1 Stopped Stderr: Container roottests3cluster-gw5-s0_0_1-1 Removing Stderr: Container roottests3cluster-gw5-resolver-1 Stopped Stderr: Container roottests3cluster-gw5-resolver-1 Removing Stderr: Container roottests3cluster-gw5-s0_0_0-1 Stopped Stderr: Container roottests3cluster-gw5-s0_0_0-1 Removing Stderr: Container roottests3cluster-gw5-s0_1_0-1 Stopped Stderr: Container roottests3cluster-gw5-s0_1_0-1 Removing Stderr: Container roottests3cluster-gw5-s0_0_1-1 Removed Stderr: Container roottests3cluster-gw5-resolver-1 Removed Stderr: Container roottests3cluster-gw5-s0_0_0-1 Removed Stderr: Container 
roottests3cluster-gw5-minio1-1 Stopping Stderr: Container roottests3cluster-gw5-s0_1_0-1 Removed Stderr: Container roottests3cluster-gw5-zoo2-1 Stopping Stderr: Container roottests3cluster-gw5-zoo3-1 Stopping Stderr: Container roottests3cluster-gw5-zoo1-1 Stopping Stderr: Container roottests3cluster-gw5-zoo2-1 Stopped Stderr: Container roottests3cluster-gw5-zoo2-1 Removing Stderr: Container roottests3cluster-gw5-zoo1-1 Stopped Stderr: Container roottests3cluster-gw5-zoo1-1 Removing Stderr: Container roottests3cluster-gw5-zoo3-1 Stopped Stderr: Container roottests3cluster-gw5-zoo3-1 Removing Stderr: Container roottests3cluster-gw5-minio1-1 Stopped Stderr: Container roottests3cluster-gw5-minio1-1 Removing Stderr: Container roottests3cluster-gw5-zoo2-1 Removed Stderr: Container roottests3cluster-gw5-zoo1-1 Removed Stderr: Container roottests3cluster-gw5-zoo3-1 Removed Stderr: Container roottests3cluster-gw5-minio1-1 Removed Stderr: Container roottests3cluster-gw5-proxy2-1 Stopping Stderr: Container roottests3cluster-gw5-proxy1-1 Stopping Stderr: Container roottests3cluster-gw5-proxy1-1 Stopped Stderr: Container roottests3cluster-gw5-proxy1-1 Removing Stderr: Container roottests3cluster-gw5-proxy2-1 Stopped Stderr: Container roottests3cluster-gw5-proxy2-1 Removing Stderr: Container roottests3cluster-gw5-proxy2-1 Removed Stderr: Container roottests3cluster-gw5-proxy1-1 Removed Stderr: Volume roottests3cluster-gw5_data1-1 Removing Stderr: Network roottests3cluster-gw5_default Removing Stderr: Volume roottests3cluster-gw5_data1-1 Removed Stderr: Network roottests3cluster-gw5_default Removed Cleanup called Docker networks for project roottests3cluster-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottests3cluster-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/4b9d30ca-2273-4db1-a542-c2e4970e1c9a/0_0_0_0/old-style-prefix_with-several-section_opn_kvptbbnignrdxrhilwgtjooxpiith' ORDER BY ALL on node Docker volumes for project roottests3cluster-gw5 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottests3cluster-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottests3cluster-gw5 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:4 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 4 test_restart_server/test.py::test_drop_memory_database Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL(unknown_collection) on node1 Running tests in /ClickHouse/tests/integration/test_restart_server/test.py Cluster start called. 
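
[editor's note] The "create materialized view re.a0 refresh every 1 second ..." statements a little above are the fixture for test_refresh_vs_shutdown_smoke: one refreshable MV per name (re.a0, re.a1, ...), all instances of the same DDL template. A sketch of that template; the query argument stands in for the framework's node1.query:

def create_refreshable_mv(query, name):
    # `query` is any callable that sends SQL to a node, like node1.query in the log.
    query(
        f"CREATE MATERIALIZED VIEW re.{name} "
        "REFRESH EVERY 1 SECOND (x Int64) "
        "ENGINE = ReplicatedMergeTree ORDER BY x "
        "AS SELECT number * 10 AS x FROM numbers(2)"
    )

create_refreshable_mv(print, "a0")  # stub: print the DDL instead of executing it
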
is_up=False Docker networks for project roottestrestartserver-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrestartserver-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrestartserver-gw5 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestrestartserver-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrestartserver-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/4b9d30ca-2273-4db1-a542-c2e4970e1c9a/1_0_0_0' ORDER BY ALL on node Docker volumes for project roottestrestartserver-gw5 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrestartserver-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrestartserver-gw5 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Stdout:4 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 4 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/database Setup logs dir /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/.env --project-name roottestrestartserver-gw5 --file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/docker-compose.yml pull] Executing query CREATE DATABASE postgres_database ENGINE = PostgreSQL(postgres3, port=5432) on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Executing query CREATE ROLE extra_role on instance Stdout: PID TTY TIME CMD Stdout: 785 ? 
00:00:02 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/4b9d30ca-2273-4db1-a542-c2e4970e1c9a/1_0_0_0/old-style-prefix_with-several-section_ctm_ucqbuauexwxxvivrgmdnkkagrmskg' ORDER BY ALL on node Stdout:785 Executing query CREATE USER extra_user DEFAULT ROLE extra_role on instance Executing query SELECT count() FROM postgres_database.test_table on node1 Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on node Executing query GRANT SELECT ON table1 TO extra_role on instance Executing query DROP DATABASE postgres_database; CREATE DATABASE postgres_database ENGINE = PostgreSQL(postgres1, use_table_cache=1); on node1 Executing query SELECT * FROM table1 on instance Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on new_node Executing query SELECT count() FROM postgres_database.test_table on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query GRANT SELECT ON table2 TO rre on instance run container_id:roottestpostgresqldatabaseengine-gw0-node1-1 detach:False nothrow:False cmd: ['bash', '-c', '[ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "Cached table `test_table`" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true'] Command:[docker exec roottestpostgresqldatabaseengine-gw0-node1-1 bash -c [ -f /var/log/clickhouse-server/clickhouse-server.log ] && zgrep -aH "Cached table `test_table`" /var/log/clickhouse-server/clickhouse-server.log | ( [ -z "" ] && cat || grep -v "$" ) || true] Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:19:51.033953 [ 9 ] {cf05e448-334d-47ef-aa9f-81db2af89005} DatabasePostgreSQL(postgres_database): Cached table `test_table` Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:19:54.084816 [ 9 ] {140e56d0-7185-4937-b5c0-f049b1b55ea0} DatabasePostgreSQL(postgres_database): Cached table `test_table` Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:19:55.101122 [ 9 ] {4509ebab-464f-442e-84e1-8663b285f7ab} DatabasePostgreSQL(postgres_database): Cached table `test_table` Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:19:57.622597 [ 9 ] {58bcdb9a-5b47-4561-bf62-5793761649ea} DatabasePostgreSQL(postgres_database): Cached table `test_table` Stdout:/var/log/clickhouse-server/clickhouse-server.log:2025.04.02 03:20:13.850498 [ 9 ] {fbca4003-38d6-420c-a806-27489358192f} DatabasePostgreSQL(postgres_database): Cached table `test_table` Stderr:bash: line 1: test_table: command not found Executing query DROP DATABASE postgres_database on node1 Executing query CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) 
ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3_template_key', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3_template_key', allow_remote_fs_zero_copy_replication='0' on node [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2] test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3] Executing query SELECT * FROM table1 on instance run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] [gw0] PASSED test_postgresql_database_engine/test.py::test_predefined_connection_configuration Command:[docker compose --env-file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env --project-name roottestpostgresqldatabaseengine-gw0 --file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml stop --timeout 20] Stdout:785 Executing query CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3_template_key', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3_template_key', allow_remote_fs_zero_copy_replication='0' on new_node Executing query SELECT * FROM table2 on instance Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:785 Executing query DROP ROLE rre on instance Executing query INSERT INTO test_replicated_merge_tree VALUES (0, 'a') on node Executing query DROP USER ure on instance Executing query DROP TABLE table1 on instance Executing query INSERT INTO test_replicated_merge_tree VALUES (1, 'b') on new_node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query DROP TABLE table2 on instance Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on node Executing query DROP ROLE extra_role on instance Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on new_node Executing query DROP USER extra_user on instance Executing query SELECT count() FROM test_replicated_merge_tree on node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:785 Executing query DROP USER IF EXISTS A, B on instance [gw1] PASSED test_role/test.py::test_role_expiration[True] Executing query SELECT count() FROM 
test_replicated_merge_tree on new_node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on node Executing query CREATE USER A, B, C on instance test_role/test.py::test_roles_cache Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on new_node Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/.env --project-name roottestrestartserver-gw5 --file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/.env --project-name roottestrestartserver-gw5 --file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/docker-compose.yml up -d --no-recreate] Executing query CREATE TABLE tbl (x1 Int64, x2 Int64, x3 Int64, x4 Int64, x5 Int64, x6 Int64, x7 Int64, x8 Int64, x9 Int64, x10 Int64) ENGINE=MergeTree ORDER BY tuple() on instance Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:785 Executing query INSERT INTO tbl VALUES (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) on instance Stderr: proxy2 Skipped - Image is already being pulled by proxy1 Stderr: node1 Pulling Stderr: minio1 Pulling Stderr: resolver Pulling Stderr: proxy1 Pulling Stderr: proxy1 Pulled Stderr: resolver Pulled Stderr: node1 Pulled Stderr: minio1 Pulled Trying to create Minio instance by command docker compose --project-name roottests3accessheaders-gw9 --env-file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name roottests3accessheaders-gw9 --env-file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/.env --project-name roottestrocksdbreadonly-gw2 --file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/.env --project-name roottestrocksdbreadonly-gw2 --file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/docker-compose.yml up -d --no-recreate] [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3] test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4] Executing query CREATE TABLE 
test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3_template_key_zero_copy', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3_template_key', allow_remote_fs_zero_copy_replication='1' on node Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Executing query CREATE TABLE test_replicated_merge_tree ( id Int64, val String ) ENGINE=ReplicatedMergeTree('/clickhouse/tables/test_replicated_merge_tree_s3_template_key_zero_copy', '{replica}') PARTITION BY id ORDER BY (id, val) SETTINGS storage_policy='s3_template_key', allow_remote_fs_zero_copy_replication='1' on new_node Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query INSERT INTO test_replicated_merge_tree VALUES (0, 'a') on node Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Stderr: Network roottestrocksdbreadonly-gw2_default Creating Stderr: Network 
roottestrocksdbreadonly-gw2_default Created Stderr: Container roottestrocksdbreadonly-gw2-node-1 Creating Stderr: Container roottestrocksdbreadonly-gw2-node-1 Created Stderr: Container roottestrocksdbreadonly-gw2-node-1 Starting Stderr: Container roottestrocksdbreadonly-gw2-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrocksdbreadonly-gw2-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrocksdbreadonly-gw2-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.6.2... http://localhost:None "GET /v1.46/containers/roottestrocksdbreadonly-gw2-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query INSERT INTO test_replicated_merge_tree VALUES (1, 'b') on new_node Executing query CREATE ROLE R1 on instance http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Stdout:785 http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Stderr: Network roottestrestartserver-gw5_default Creating Stderr: Network roottestrestartserver-gw5_default Created Stderr: Container roottestrestartserver-gw5-node-1 Creating Stderr: Container roottestrestartserver-gw5-node-1 Created Stderr: Container roottestrestartserver-gw5-node-1 Starting Stderr: Container roottestrestartserver-gw5-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrestartserver-gw5-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrestartserver-gw5-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.5.2... 
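
[editor's note] "Waiting for ClickHouse start in node, ip: ..." followed by repeated GET /v1.46/containers/<id>/json calls is the framework polling Docker until the container reports running, before it switches to probing the server itself (the repeated "select 20" queries seen earlier). A sketch of that two-stage wait, assuming the docker Python SDK; wait_clickhouse_start and probe are illustrative names:

import time
import docker

def wait_clickhouse_start(container_name, probe, timeout=120):
    client = docker.from_env()
    deadline = time.time() + timeout
    # Stage 1: poll the Docker API (the GET /containers/<id>/json lines).
    while time.time() < deadline:
        if client.containers.get(container_name).attrs["State"]["Running"]:
            break
        time.sleep(0.5)
    # Stage 2: poll the server with a trivial query until it answers.
    while time.time() < deadline:
        if probe():  # e.g. lambda: node.query("select 20").strip() == "20"
            return True
        time.sleep(0.5)
    return False
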
http://localhost:None "GET /v1.46/containers/roottestrestartserver-gw5-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Executing query CREATE ROLE R2 on instance http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Stderr:time="2025-04-02T03:20:17Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottests3accessheaders-gw9_default Creating Stderr: Network roottests3accessheaders-gw9_default Created Stderr: Volume "roottests3accessheaders-gw9_data1-1" Creating Stderr: Volume "roottests3accessheaders-gw9_data1-1" Created Stderr: Container roottests3accessheaders-gw9-proxy2-1 Creating Stderr: Container roottests3accessheaders-gw9-proxy1-1 Creating Stderr: Container roottests3accessheaders-gw9-proxy1-1 Created Stderr: Container roottests3accessheaders-gw9-proxy2-1 Created Stderr: Container roottests3accessheaders-gw9-minio1-1 Creating Stderr: Container roottests3accessheaders-gw9-resolver-1 Creating Stderr: Container roottests3accessheaders-gw9-resolver-1 Created Stderr: Container roottests3accessheaders-gw9-minio1-1 Created Stderr: Container roottests3accessheaders-gw9-proxy2-1 Starting Stderr: Container roottests3accessheaders-gw9-proxy1-1 Starting Stderr: Container roottests3accessheaders-gw9-proxy2-1 Started Stderr: Container roottests3accessheaders-gw9-proxy1-1 Started Stderr: Container roottests3accessheaders-gw9-minio1-1 Starting Stderr: Container roottests3accessheaders-gw9-resolver-1 Starting Stderr: Container roottests3accessheaders-gw9-resolver-1 Started Stderr: Container roottests3accessheaders-gw9-minio1-1 Started Stderr:time="2025-04-02T03:20:18Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:20:18Z" level=debug msg="otel error" error="" Trying to connect to Minio... 
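
[editor's note] "Trying to connect to Minio..." is immediately followed (next line) by urllib3 exhausting its three retries with ECONNREFUSED; MinIO is still booting, so the framework simply loops and tries again. A sketch of such an outer retry loop, assuming the minio Python client; the endpoint and the minio/minio123 credentials are taken from this compose setup and may differ elsewhere:

import time
from minio import Minio

def wait_minio(endpoint="172.16.10.5:9001", attempts=10):
    client = Minio(endpoint, access_key="minio", secret_key="minio123", secure=False)
    for _ in range(attempts):
        try:
            client.list_buckets()   # any cheap call; urllib3 retries happen underneath
            return client
        except Exception as exc:    # [Errno 111] Connection refused while MinIO boots
            print(f"Can't connect to Minio: {exc}")
            time.sleep(1)
    raise RuntimeError("MinIO did not come up")
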
get_instance_ip instance_name=minio1 http://localhost:None "GET /v1.46/containers/roottests3accessheaders-gw9-minio1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=proxy1 http://localhost:None "GET /v1.46/containers/roottests3accessheaders-gw9-proxy1-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.10.5:9001 Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (2): 172.16.10.5:9001 Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (3): 172.16.10.5:9001 Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (4): 172.16.10.5:9001 Can't connect to Minio: HTTPConnectionPool(host='172.16.10.5', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x…>: Failed to establish a new connection: [Errno 111] Connection refused')) Executing query CREATE ROLE R3 on instance http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Executing query SYSTEM SYNC REPLICA test_replicated_merge_tree on new_node http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Stopping Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Stopping Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Stopped Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Executing query GRANT R1 TO A on instance Command:[docker compose --env-file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/.env --project-name roottestpostgresqldatabaseengine-gw0 --file /ClickHouse/tests/integration/test_postgresql_database_engine/_instances-0-gw0/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml down --volumes] Executing query SELECT count() FROM test_replicated_merge_tree on node http://localhost:None "GET
/v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Executing query GRANT R2 TO B on instance Stdout:785 Executing query SELECT count() FROM test_replicated_merge_tree on new_node http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Executing query SELECT uuid FROM system.tables WHERE name = 'test_replicated_merge_tree' on node http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Stopping Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Stopping Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Stopped Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Removing Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Stopped Stderr: Container 
roottestpostgresqldatabaseengine-gw0-postgres1-1 Removing Stderr: Container roottestpostgresqldatabaseengine-gw0-node1-1 Removed Stderr: Container roottestpostgresqldatabaseengine-gw0-postgres1-1 Removed Stderr: Network roottestpostgresqldatabaseengine-gw0_default Removing Stderr: Network roottestpostgresqldatabaseengine-gw0_default Removed Cleanup called Docker networks for project roottestpostgresqldatabaseengine-gw0 are NETWORK ID NAME DRIVER SCOPE http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Docker containers for project roottestpostgresqldatabaseengine-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestpostgresqldatabaseengine-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestpostgresqldatabaseengine-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] http://localhost:None "GET /v1.46/containers/9446ed37c0a4a7a7a008fcb04628ac4f5fe02292051d09bee78ecc4585ec39af/json HTTP/1.1" 200 None ClickHouse node started Executing query CREATE TABLE test (key UInt64, value String) Engine=EmbeddedRocksDB(0, '/var/lib/clickhouse/store/test_rocksdb_read_only_missing') PRIMARY KEY(key); on node Unstopped containers: {} No running containers for project: roottestpostgresqldatabaseengine-gw0 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Starting new HTTP connection (5): 172.16.10.5:9001 http://172.16.10.5:9001 "GET / HTTP/1.1" 200 0 Connected to Minio. http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Stdout:5 Command:[docker volume prune -f] http://172.16.10.5:9001 "GET /root?location= HTTP/1.1" 404 0 http://172.16.10.5:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.10.5:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.10.5:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --project-name roottests3accessheaders-gw9 --file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml up -d --no-recreate') Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --project-name roottests3accessheaders-gw9 --file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml up -d --no-recreate] Stdout:Total reclaimed space: 0B Volumes pruned: 5 test_prometheus_endpoint/test.py::test_prometheus_endpoint Running tests in /ClickHouse/tests/integration/test_prometheus_endpoint/test.py Cluster start called. is_up=False Docker networks for project roottestprometheusendpoint-gw0 are NETWORK ID NAME DRIVER SCOPE Executing query SELECT remote_path FROM system.remote_data_paths WHERE local_path LIKE '%15e471bc-040f-4893-a06d-b4a6224ae04b%' AND local_path NOT LIKE '%format_version.txt%' ORDER BY ALL on node Docker containers for project roottestprometheusendpoint-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprometheusendpoint-gw0 are DRIVER VOLUME NAME Cleanup called http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None Docker networks for project roottestprometheusendpoint-gw0 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestprometheusendpoint-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprometheusendpoint-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestprometheusendpoint-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestprometheusendpoint-gw0 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrocksdbreadonly-gw2-node-1 bash -c ps -C clickhouse] http://localhost:None "GET /v1.46/containers/418c71f1391e03800bb01b4ea29883e0099b418ffd97700a81b9c062786f441d/json HTTP/1.1" 200 None ClickHouse node started Executing query CREATE DATABASE test ENGINE Memory on node Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
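The "Cleanup called" block above (list any leftover containers for the project, then prune images and volumes until "Volumes pruned: 5") is the standard per-test teardown. A sketch of that sequence using the same docker commands the log prints; the helper name and the rm -f on leftovers are assumptions, since the log only shows the listing and prune steps:

    import subprocess

    def cleanup_project(project):
        """List leftovers for one compose project, remove them, then prune (sketch)."""
        listing = subprocess.run(
            ["docker", "container", "list", "--all",
             "--filter", f"name=^/{project}-.*-1$",
             "--format", "{{.ID}}:{{.Names}}"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        for entry in listing:  # one "id:name" per line
            subprocess.run(["docker", "rm", "-f", entry.split(":", 1)[0]], check=False)
        subprocess.run(["docker", "image", "prune", "-f"], check=False)
        subprocess.run(["docker", "volume", "prune", "-f"], check=False)

    cleanup_project("roottestprometheusendpoint-gw0")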
Command:[docker volume ls | wc -l] Stdout:5 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 5 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_prometheus_endpoint/configs/prom_conf.xml'] to /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/database Setup logs dir /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Stdout: PID TTY TIME CMD Stdout: 8 ? 00:00:01 clickhouse run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrocksdbreadonly-gw2-node-1 bash -c pkill clickhouse] Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/.env --project-name roottestprometheusendpoint-gw0 --file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/docker-compose.yml pull] run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query SELECT uuid FROM system.tables WHERE name = 'test_replicated_merge_tree' on new_node Executing query CREATE TABLE test.test_table(a String) ENGINE Memory on node Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and 
table = 'table_for_recompression' on node2 Stderr: Container roottests3accessheaders-gw9-proxy2-1 Running Stderr: Container roottests3accessheaders-gw9-proxy1-1 Running Stderr: Container roottests3accessheaders-gw9-minio1-1 Running Stderr: Container roottests3accessheaders-gw9-node1-1 Creating Stderr: Container roottests3accessheaders-gw9-resolver-1 Running Stderr: Container roottests3accessheaders-gw9-node1-1 Created Stderr: Container roottests3accessheaders-gw9-node1-1 Starting Stderr: Container roottests3accessheaders-gw9-node1-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottests3accessheaders-gw9-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottests3accessheaders-gw9-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.10.6... http://localhost:None "GET /v1.46/containers/roottests3accessheaders-gw9-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Stdout:785 Executing query SELECT remote_path FROM system.remote_data_paths WHERE local_path LIKE '%412b2985-ebfe-427a-9b1e-5063ddbe6d19%' AND local_path NOT LIKE '%format_version.txt%' ORDER BY ALL on new_node Executing query GRANT R3 TO R2 on instance Executing query DROP DATABASE test on node http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Executing query GRANT SELECT(x8) ON tbl TO R1 on instance http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrestartserver-gw5-node-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 8 ? 
00:00:01 clickhouse run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill -9 clickhouse'] Command:[docker exec -u root roottestrestartserver-gw5-node-1 bash -c pkill -9 clickhouse] Executing query SELECT name FROM system.parts WHERE table = 'test_replicated_merge_tree' AND active ORDER BY ALL on node http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Stdout:8 http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SELECT value FROM system.zookeeper WHERE path='/clickhouse/tables/test_replicated_merge_tree_s3_template_key_zero_copy' and name='table_shared_id' on node http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Executing query SELECT 
name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/15e471bc-040f-4893-a06d-b4a6224ae04b/0_0_0_0' ORDER BY ALL on node run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Stdout:8 Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/15e471bc-040f-4893-a06d-b4a6224ae04b/0_0_0_0/old-style-prefix_with-several-section_pek_ujidnipyaknnkbdhpppjkjfgankxl' ORDER BY ALL on node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stdout:785 http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/15e471bc-040f-4893-a06d-b4a6224ae04b/1_0_0_0' ORDER BY ALL on node Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM 
viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/671ab3eb465f3d692f5407b284e5088ba1da0b0271993dd646222e660ca57d5c/json HTTP/1.1" 200 None ClickHouse node1 started http://172.16.10.5:9001 "PUT /root?policy= HTTP/1.1" 204 0 http://172.16.10.5:9001 "GET /root-with-auth?location= HTTP/1.1" 404 0 http://172.16.10.5:9001 "PUT /root-with-auth HTTP/1.1" 200 0 S3 bucket created Starting mock server mocker_s3.py run container_id:roottests3accessheaders-gw9-resolver-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname mocker_s3.py) && echo aW1wb3J0IGh0dHAuY2xpZW50CmltcG9ydCBodHRwLnNlcnZlcgppbXBvcnQgcmFuZG9tCmltcG9ydCBzb2NrZXRzZXJ2ZXIKaW1wb3J0IHN5cwppbXBvcnQgdXJsbGliLnBhcnNlCgpVUFNUUkVBTV9IT1NUID0gIm1pbmlvMTo5MDAxIgpyYW5kb20uc2VlZCgiTm8gbGlzdCBvYmplY3RzLzEuMCIpCgoKZGVmIHJlcXVlc3QoY29tbWFuZCwgdXJsLCBoZWFkZXJzPXt9LCBkYXRhPU5vbmUpOgogICAgIiIiTWluaS1yZXF1ZXN0cy4iIiIKCiAgICBjbGFzcyBEdW1teToKICAgICAgICBwYXNzCgogICAgcGFydHMgPSB1cmxsaWIucGFyc2UudXJscGFyc2UodXJsKQogICAgYyA9IGh0dHAuY2xpZW50LkhUVFBDb25uZWN0aW9uKHBhcnRzLmhvc3RuYW1lLCBwYXJ0cy5wb3J0KQogICAgYy5yZXF1ZXN0KAogICAgICAgIGNvbW1hbmQsCiAgICAgICAgdXJsbGliLnBhcnNlLnVybHVucGFyc2UocGFydHMuX3JlcGxhY2Uoc2NoZW1lPSIiLCBuZXRsb2M9IiIpKSwKICAgICAgICBoZWFkZXJzPWhlYWRlcnMsCiAgICAgICAgYm9keT1kYXRhLAogICAgKQogICAgciA9IGMuZ2V0cmVzcG9uc2UoKQogICAgcmVzdWx0ID0gRHVtbXkoKQogICAgcmVzdWx0LnN0YXR1c19jb2RlID0gci5zdGF0dXMKICAgIHJlc3VsdC5oZWFkZXJzID0gci5oZWFkZXJzCiAgICByZXN1bHQuY29udGVudCA9IHIucmVhZCgpCiAgICByZXR1cm4gcmVzdWx0CgoKQ1VTVE9NX0FVVEhfVE9LRU5fSEVBREVSID0gImN1c3RvbS1hdXRoLXRva2VuIgpDVVNUT01fQVVUSF9UT0tFTl9WQUxJRF9WQUxVRSA9ICJWYWxpZFRva2VuMTIzNCIKCgpjbGFzcyBSZXF1ZXN0SGFuZGxlcihodHRwLnNlcnZlci5CYXNlSFRUUFJlcXVlc3RIYW5kbGVyKToKICAgIGRlZiBkb19HRVQoc2VsZik6CiAgICAgICAgaWYgc2VsZi5wYXRoID09ICIvIjoKICAgICAgICAgICAgc2VsZi5zZW5kX3Jlc3BvbnNlKDIwMCkKICAgICAgICAgICAgc2VsZi5zZW5kX2hlYWRlcigiQ29udGVudC1UeXBlIiwgInRleHQvcGxhaW4iKQogICAgICAgICAgICBzZWxmLmVuZF9oZWFkZXJzKCkKICAgICAgICAgICAgc2VsZi53ZmlsZS53cml0ZShiIk9LIikKICAgICAgICAgICAgcmV0dXJuCiAgICAgICAgc2VsZi5kb19IRUFEKCkKCiAgICBkZWYgZG9fUFVUKHNlbGYpOgogICAgICAgIHNlbGYuZG9fSEVBRCgpCgogICAgZGVmIGRvX0RFTEVURShzZWxmKToKICAgICAgICBzZWxmLmRvX0hFQUQoKQoKICAgIGRlZiBkb19QT1NUKHNlbGYpOgogICAgICAgIHNlbGYuZG9fSEVBRCgpCgogICAgZGVmIGRvX0hFQUQoc2VsZik6CgogICAgICAgIGN1c3RvbV9hdXRoX3Rva2VuID0gc2VsZi5oZWFkZXJzLmdldChDVVNUT01fQVVUSF9UT0tFTl9IRUFERVIpCiAgICAgICAgaWYgY3VzdG9tX2F1dGhfdG9rZW4gYW5kIGN1c3RvbV9hdXRoX3Rva2VuICE9IENVU1RPTV9BVVRIX1RPS0VOX1ZBTElEX1ZBTFVFOgogICAgICAgICAgICBzZWxmLnNlbmRfcmVzcG9uc2UoNDAzKQogICAgICAgICAgICBzZWxmLnNlbmRfaGVhZGVyKCJDb250ZW50LVR5cGUiLCAiYXBwbGljYXRpb24veG1sIikKICAgICAgICAgICAgc2VsZi5lbmRfaGVhZGVycygpCgogICAgICAgICAgICBib2R5ID0gZiIiIjw/eG1sIHZlcnNpb249IjEuMCIgZW5jb2Rpbmc9IlVURi04Ij8+CjxFcnJvcj4KICAgIDxDb2RlPkFjY2Vzc0RlbmllZDwvQ29kZT4KICAgIDxNZXNzYWdlPkFjY2VzcyBEZW5pZWQuIEN1c3RvbSB0b2tlbiB3YXMge2N1c3RvbV9hdXRoX3Rva2VufSwgdGhlIGNvcnJlY3Qgb25lOiB7Q1VTVE9NX0FVVEhfVE9LRU5fVkFMSURfVkFMVUV9LjwvTWVzc2FnZT4KICAgIDxSZXNvdXJjZT5SRVNPVVJDRTwvUmVzb3VyY2U+CiAgICA8UmVxdWVzdElkPlJFUVVFU1R
fSUQ8L1JlcXVlc3RJZD4KPC9FcnJvcj4KIiIiCiAgICAgICAgICAgIHNlbGYud2ZpbGUud3JpdGUoYm9keS5lbmNvZGUoKSkKICAgICAgICAgICAgcmV0dXJuCgogICAgICAgIGNvbnRlbnRfbGVuZ3RoID0gc2VsZi5oZWFkZXJzLmdldCgiQ29udGVudC1MZW5ndGgiKQogICAgICAgIGRhdGEgPSBzZWxmLnJmaWxlLnJlYWQoaW50KGNvbnRlbnRfbGVuZ3RoKSkgaWYgY29udGVudF9sZW5ndGggZWxzZSBOb25lCiAgICAgICAgciA9IHJlcXVlc3QoCiAgICAgICAgICAgIHNlbGYuY29tbWFuZCwKICAgICAgICAgICAgZiJodHRwOi8ve1VQU1RSRUFNX0hPU1R9e3NlbGYucGF0aH0iLAogICAgICAgICAgICBoZWFkZXJzPXNlbGYuaGVhZGVycywKICAgICAgICAgICAgZGF0YT1kYXRhLAogICAgICAgICkKICAgICAgICBzZWxmLnNlbmRfcmVzcG9uc2Uoci5zdGF0dXNfY29kZSkKICAgICAgICBmb3IgaywgdiBpbiByLmhlYWRlcnMuaXRlbXMoKToKICAgICAgICAgICAgc2VsZi5zZW5kX2hlYWRlcihrLCB2KQogICAgICAgIHNlbGYuZW5kX2hlYWRlcnMoKQogICAgICAgIHNlbGYud2ZpbGUud3JpdGUoci5jb250ZW50KQogICAgICAgIHNlbGYud2ZpbGUuY2xvc2UoKQoKCmNsYXNzIFRocmVhZGVkSFRUUFNlcnZlcihzb2NrZXRzZXJ2ZXIuVGhyZWFkaW5nTWl4SW4sIGh0dHAuc2VydmVyLkhUVFBTZXJ2ZXIpOgogICAgIiIiSGFuZGxlIHJlcXVlc3RzIGluIGEgc2VwYXJhdGUgdGhyZWFkLiIiIgoKCmh0dHBkID0gVGhyZWFkZWRIVFRQU2VydmVyKCgiMC4wLjAuMCIsIGludChzeXMuYXJndlsxXSkpLCBSZXF1ZXN0SGFuZGxlcikKaHR0cGQuc2VydmVfZm9yZXZlcigpCg== | base64 --decode > mocker_s3.py'] Command:[docker exec roottests3accessheaders-gw9-resolver-1 bash -c mkdir -p $(dirname mocker_s3.py) && echo aW1wb3J0IGh0dHAuY2xpZW50CmltcG9ydCBodHRwLnNlcnZlcgppbXBvcnQgcmFuZG9tCmltcG9ydCBzb2NrZXRzZXJ2ZXIKaW1wb3J0IHN5cwppbXBvcnQgdXJsbGliLnBhcnNlCgpVUFNUUkVBTV9IT1NUID0gIm1pbmlvMTo5MDAxIgpyYW5kb20uc2VlZCgiTm8gbGlzdCBvYmplY3RzLzEuMCIpCgoKZGVmIHJlcXVlc3QoY29tbWFuZCwgdXJsLCBoZWFkZXJzPXt9LCBkYXRhPU5vbmUpOgogICAgIiIiTWluaS1yZXF1ZXN0cy4iIiIKCiAgICBjbGFzcyBEdW1teToKICAgICAgICBwYXNzCgogICAgcGFydHMgPSB1cmxsaWIucGFyc2UudXJscGFyc2UodXJsKQogICAgYyA9IGh0dHAuY2xpZW50LkhUVFBDb25uZWN0aW9uKHBhcnRzLmhvc3RuYW1lLCBwYXJ0cy5wb3J0KQogICAgYy5yZXF1ZXN0KAogICAgICAgIGNvbW1hbmQsCiAgICAgICAgdXJsbGliLnBhcnNlLnVybHVucGFyc2UocGFydHMuX3JlcGxhY2Uoc2NoZW1lPSIiLCBuZXRsb2M9IiIpKSwKICAgICAgICBoZWFkZXJzPWhlYWRlcnMsCiAgICAgICAgYm9keT1kYXRhLAogICAgKQogICAgciA9IGMuZ2V0cmVzcG9uc2UoKQogICAgcmVzdWx0ID0gRHVtbXkoKQogICAgcmVzdWx0LnN0YXR1c19jb2RlID0gci5zdGF0dXMKICAgIHJlc3VsdC5oZWFkZXJzID0gci5oZWFkZXJzCiAgICByZXN1bHQuY29udGVudCA9IHIucmVhZCgpCiAgICByZXR1cm4gcmVzdWx0CgoKQ1VTVE9NX0FVVEhfVE9LRU5fSEVBREVSID0gImN1c3RvbS1hdXRoLXRva2VuIgpDVVNUT01fQVVUSF9UT0tFTl9WQUxJRF9WQUxVRSA9ICJWYWxpZFRva2VuMTIzNCIKCgpjbGFzcyBSZXF1ZXN0SGFuZGxlcihodHRwLnNlcnZlci5CYXNlSFRUUFJlcXVlc3RIYW5kbGVyKToKICAgIGRlZiBkb19HRVQoc2VsZik6CiAgICAgICAgaWYgc2VsZi5wYXRoID09ICIvIjoKICAgICAgICAgICAgc2VsZi5zZW5kX3Jlc3BvbnNlKDIwMCkKICAgICAgICAgICAgc2VsZi5zZW5kX2hlYWRlcigiQ29udGVudC1UeXBlIiwgInRleHQvcGxhaW4iKQogICAgICAgICAgICBzZWxmLmVuZF9oZWFkZXJzKCkKICAgICAgICAgICAgc2VsZi53ZmlsZS53cml0ZShiIk9LIikKICAgICAgICAgICAgcmV0dXJuCiAgICAgICAgc2VsZi5kb19IRUFEKCkKCiAgICBkZWYgZG9fUFVUKHNlbGYpOgogICAgICAgIHNlbGYuZG9fSEVBRCgpCgogICAgZGVmIGRvX0RFTEVURShzZWxmKToKICAgICAgICBzZWxmLmRvX0hFQUQoKQoKICAgIGRlZiBkb19QT1NUKHNlbGYpOgogICAgICAgIHNlbGYuZG9fSEVBRCgpCgogICAgZGVmIGRvX0hFQUQoc2VsZik6CgogICAgICAgIGN1c3RvbV9hdXRoX3Rva2VuID0gc2VsZi5oZWFkZXJzLmdldChDVVNUT01fQVVUSF9UT0tFTl9IRUFERVIpCiAgICAgICAgaWYgY3VzdG9tX2F1dGhfdG9rZW4gYW5kIGN1c3RvbV9hdXRoX3Rva2VuICE9IENVU1RPTV9BVVRIX1RPS0VOX1ZBTElEX1ZBTFVFOgogICAgICAgICAgICBzZWxmLnNlbmRfcmVzcG9uc2UoNDAzKQogICAgICAgICAgICBzZWxmLnNlbmRfaGVhZGVyKCJDb250ZW50LVR5cGUiLCAiYXBwbGljYXRpb24veG1sIikKICAgICAgICAgICAgc2VsZi5lbmRfaGVhZGVycygpCgogICAgICAgICAgICBib2R5ID0gZiIiIjw/eG1sIHZlcnNpb249IjEuMCIgZW5jb2Rpbmc9IlVURi04Ij8+CjxFcnJvcj4KICAgIDxDb2RlPkFjY2Vzc0RlbmllZDwvQ29kZT4KICAgIDxNZXNzYWdlPkFjY2VzcyBEZW5pZWQuIEN1c3RvbSB0b2tlbiB3YXMge2N1c3RvbV9hdXRoX3Rva2VufSwgd
GhlIGNvcnJlY3Qgb25lOiB7Q1VTVE9NX0FVVEhfVE9LRU5fVkFMSURfVkFMVUV9LjwvTWVzc2FnZT4KICAgIDxSZXNvdXJjZT5SRVNPVVJDRTwvUmVzb3VyY2U+CiAgICA8UmVxdWVzdElkPlJFUVVFU1RfSUQ8L1JlcXVlc3RJZD4KPC9FcnJvcj4KIiIiCiAgICAgICAgICAgIHNlbGYud2ZpbGUud3JpdGUoYm9keS5lbmNvZGUoKSkKICAgICAgICAgICAgcmV0dXJuCgogICAgICAgIGNvbnRlbnRfbGVuZ3RoID0gc2VsZi5oZWFkZXJzLmdldCgiQ29udGVudC1MZW5ndGgiKQogICAgICAgIGRhdGEgPSBzZWxmLnJmaWxlLnJlYWQoaW50KGNvbnRlbnRfbGVuZ3RoKSkgaWYgY29udGVudF9sZW5ndGggZWxzZSBOb25lCiAgICAgICAgciA9IHJlcXVlc3QoCiAgICAgICAgICAgIHNlbGYuY29tbWFuZCwKICAgICAgICAgICAgZiJodHRwOi8ve1VQU1RSRUFNX0hPU1R9e3NlbGYucGF0aH0iLAogICAgICAgICAgICBoZWFkZXJzPXNlbGYuaGVhZGVycywKICAgICAgICAgICAgZGF0YT1kYXRhLAogICAgICAgICkKICAgICAgICBzZWxmLnNlbmRfcmVzcG9uc2Uoci5zdGF0dXNfY29kZSkKICAgICAgICBmb3IgaywgdiBpbiByLmhlYWRlcnMuaXRlbXMoKToKICAgICAgICAgICAgc2VsZi5zZW5kX2hlYWRlcihrLCB2KQogICAgICAgIHNlbGYuZW5kX2hlYWRlcnMoKQogICAgICAgIHNlbGYud2ZpbGUud3JpdGUoci5jb250ZW50KQogICAgICAgIHNlbGYud2ZpbGUuY2xvc2UoKQoKCmNsYXNzIFRocmVhZGVkSFRUUFNlcnZlcihzb2NrZXRzZXJ2ZXIuVGhyZWFkaW5nTWl4SW4sIGh0dHAuc2VydmVyLkhUVFBTZXJ2ZXIpOgogICAgIiIiSGFuZGxlIHJlcXVlc3RzIGluIGEgc2VwYXJhdGUgdGhyZWFkLiIiIgoKCmh0dHBkID0gVGhyZWFkZWRIVFRQU2VydmVyKCgiMC4wLjAuMCIsIGludChzeXMuYXJndlsxXSkpLCBSZXF1ZXN0SGFuZGxlcikKaHR0cGQuc2VydmVfZm9yZXZlcigpCg== | base64 --decode > mocker_s3.py] Executing query SELECT name FROM system.parts where name = 'all_1_1_4' and table = 'table_for_recompression' on node2 Executing query SELECT name FROM system.zookeeper WHERE path='/clickhouse/zero_copy/zero_copy_s3/15e471bc-040f-4893-a06d-b4a6224ae04b/1_0_0_0/uid-first-random-part_new-style-prefix_constant-part_ick_lvrqoegfgyvhzvepinxoloshzrgjq' ORDER BY ALL on node run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance run container_id:roottests3accessheaders-gw9-resolver-1 detach:True nothrow:False cmd: ['bash', '-c', 'python3 mocker_s3.py 8081 >/var/log/resolver/mocker_s3.log 2>/var/log/resolver/mocker_s3.err.log'] Command:[docker exec roottests3accessheaders-gw9-resolver-1 bash -c python3 mocker_s3.py 8081 >/var/log/resolver/mocker_s3.log 2>/var/log/resolver/mocker_s3.err.log] run container_id:roottests3accessheaders-gw9-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8081/'] Command:[docker exec roottests3accessheaders-gw9-resolver-1 curl -s http://localhost:8081/] run container_id:roottestrestartserver-gw5-node-1 
detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestrestartserver-gw5-node-1/exec HTTP/1.1" 201 74 Exitcode:7 http://localhost:None "POST /v1.46/exec/e2e3a2828d5cbd36889f34edf8ce92f58a94b9d1356d02fd56e401cd846e37d9/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/e2e3a2828d5cbd36889f34edf8ce92f58a94b9d1356d02fd56e401cd846e37d9/json HTTP/1.1" 200 586 Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on node Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query DROP TABLE IF EXISTS test_replicated_merge_tree SYNC on new_node Stdout:8 Executing query GRANT R3 TO C on instance Executing query OPTIMIZE TABLE table_for_recompression FINAL on node2 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Executing query GRANT SELECT(x5) ON tbl TO R3 on instance Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:785 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4] Executing query CREATE TABLE test_read_new_format ( id Int64, data String ) ENGINE=MergeTree() ORDER BY id on new_node test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format Executing query SELECT default_compression_codec FROM system.parts where name = 'all_1_1_4' on node2 Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE 
null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Executing query INSERT INTO test_read_new_format VALUES (1, 'Hello') on new_node Executing query SELECT recompression_ttl_info.expression FROM system.parts where name = 'all_1_1_4' on node2 Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance run container_id:roottests3accessheaders-gw9-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8081/'] Command:[docker exec roottests3accessheaders-gw9-resolver-1 curl -s http://localhost:8081/] run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:OK mocker_s3.py answered OK on attempt 2 Mock server mocker_s3.py started Stdout:727 Clickhouse process running. 
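The long base64 string piped through "base64 --decode > mocker_s3.py" earlier in this section is the mock server whose startup is confirmed just above ("mocker_s3.py answered OK on attempt 2"). Decoded and lightly abridged here (the original also imports random and urllib.parse, routes the upstream call through a small requests-like helper, and writes an XML body with the 403), it is a threaded HTTP proxy in front of minio1:9001 that rejects any request carrying a custom-auth-token header other than ValidToken1234:

    import http.client
    import http.server
    import socketserver
    import sys

    UPSTREAM_HOST = "minio1:9001"
    CUSTOM_AUTH_TOKEN_HEADER = "custom-auth-token"
    CUSTOM_AUTH_TOKEN_VALID_VALUE = "ValidToken1234"

    class RequestHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":  # health check answered with "OK" for the curl probe
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"OK")
                return
            self.do_HEAD()

        def do_PUT(self):  # do_DELETE and do_POST are identical in the original
            self.do_HEAD()

        def do_HEAD(self):
            token = self.headers.get(CUSTOM_AUTH_TOKEN_HEADER)
            if token and token != CUSTOM_AUTH_TOKEN_VALID_VALUE:
                self.send_response(403)  # AccessDenied XML body elided here
                self.send_header("Content-Type", "application/xml")
                self.end_headers()
                return
            # Otherwise proxy the request verbatim to MinIO and relay the response.
            length = self.headers.get("Content-Length")
            body = self.rfile.read(int(length)) if length else None
            conn = http.client.HTTPConnection(UPSTREAM_HOST)
            conn.request(self.command, self.path, headers=self.headers, body=body)
            resp = conn.getresponse()
            self.send_response(resp.status)
            for k, v in resp.headers.items():
                self.send_header(k, v)
            self.end_headers()
            self.wfile.write(resp.read())

    class ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
        """Handle requests in a separate thread."""

    httpd = ThreadedHTTPServer(("0.0.0.0", int(sys.argv[1])), RequestHandler)
    httpd.serve_forever()

This proxy is what lets the later sed edits to s3_headers.xml flip node1 between a 403 AccessDenied and a successful read of the same bucket.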
run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SET s3_truncate_on_insert=1; INSERT INTO FUNCTION s3('http://minio1:9001/root/test_static_override.csv', 'minio', 'minio123','CSV') SELECT number as a, toString(number) as b FROM numbers(3); on node1 Stdout:727 Executing query select 20 on node [gw6] PASSED test_recompression_ttl/test.py::test_recompression_multiple_ttls Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance test_recompression_ttl/test.py::test_recompression_replicated Executing query CREATE TABLE recompression_replicated (d DateTime, key UInt64, data String) ENGINE ReplicatedMergeTree('/test/rr', '1') ORDER BY tuple() TTL d + INTERVAL 10 SECOND RECOMPRESS CODEC(ZSTD(13)) SETTINGS merge_with_recompression_ttl_timeout = 0 on node1 Executing query SELECT name FROM system.parts WHERE table = 'test_read_new_format' and active LIMIT 1 on new_node run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'rm -r /var/lib/clickhouse/store/test_rocksdb_read_only_missing'] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c rm -r /var/lib/clickhouse/store/test_rocksdb_read_only_missing] Executing query DROP TABLE IF EXISTS test_static_override; CREATE TABLE test_static_override (name String, value UInt32) ENGINE=S3('http://resolver:8081/root/test_static_override.csv', 'minio', 'minio123', 'CSV'); on node1 run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. 
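The pkill / "ps ax | grep 'clickhouse' ..." pairs recurring through this section implement stop, wait, restart: signal the server, poll until the PID pipeline prints nothing ("No clickhouse process running. Start new one."), then exec a fresh server with the "Entrypoint cmd:" shown earlier. A condensed sketch via subprocess; the real harness drives the same steps through the Docker API /exec endpoints visible in the POST lines, so names and flags here are illustrative:

    import subprocess
    import time

    PS_PIPELINE = ("ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
                   "| grep -v 'bash -c' | awk '{print $1}'")

    def restart_clickhouse(container, timeout=60.0):
        """pkill, wait for the PID list to drain, then start a fresh server (sketch)."""
        subprocess.run(["docker", "exec", "-u", "root", container,
                        "bash", "-c", "pkill clickhouse"], check=False)
        deadline = time.time() + timeout
        while time.time() < deadline:
            pids = subprocess.run(["docker", "exec", container, "bash", "-c", PS_PIPELINE],
                                  capture_output=True, text=True).stdout.strip()
            if not pids:
                break  # "No clickhouse process running. Start new one."
            time.sleep(0.5)
        # Entrypoint copied from the "Entrypoint cmd:" line earlier in this log.
        subprocess.run(["docker", "exec", "--detach", container,
                        "clickhouse", "server",
                        "--config-file=/etc/clickhouse-server/config.xml",
                        "--log-file=/var/log/clickhouse-server/clickhouse-server.log",
                        "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log",
                        "--"], check=True)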
http://localhost:None "POST /v1.46/containers/roottestrocksdbreadonly-gw2-node-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/c649db5816ef1b585651274bfe323c492e2296d9089e7db9f6e13cec9bb73318/start HTTP/1.1" 200 0 Executing query CREATE TABLE recompression_replicated (d DateTime, key UInt64, data String) ENGINE ReplicatedMergeTree('/test/rr', '2') ORDER BY tuple() TTL d + INTERVAL 10 SECOND RECOMPRESS CODEC(ZSTD(13)) SETTINGS merge_with_recompression_ttl_timeout = 0 on node2 http://localhost:None "GET /v1.46/exec/c649db5816ef1b585651274bfe323c492e2296d9089e7db9f6e13cec9bb73318/json HTTP/1.1" 200 586 Executing query SELECT path FROM system.parts WHERE table = 'test_read_new_format' and name = 'all_1_1_0' on new_node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:785 Executing query SYSTEM DROP QUERY CACHE on node1 Executing query SELECT remote_path FROM system.remote_data_paths WHERE concat(path, local_path) = '/var/lib/clickhouse/disks/s3/store/80b/80be02d0-f8ca-4bbe-a817-fe48a347dcb8/all_1_1_0/primary.cidx' on new_node Executing query INSERT INTO recompression_replicated VALUES (now(), 1, '1') on node1 Executing query select 20 on node Executing query SELECT count(*) FROM test_static_override on node1 Executing query SYSTEM SYNC REPLICA recompression_replicated on node2 Executing query ALTER TABLE test_read_new_format DETACH PART 'all_1_1_0' on new_node Executing query SHOW DATABASES LIKE 'test' on node run container_id:roottests3accessheaders-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/custom-auth-token: ValidToken1234/custom-auth-token: InvalidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml"] Command:[docker exec roottests3accessheaders-gw9-node1-1 bash -c sed -i 's/custom-auth-token: ValidToken1234/custom-auth-token: InvalidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml] Executing query SYSTEM RELOAD CONFIG on node1 Executing query SELECT default_compression_codec FROM system.parts where name = 'all_0_0_0' and table = 'recompression_replicated' on node1 Executing query CREATE TABLE flush_test (a String, b UInt64) ENGINE = MergeTree ORDER BY a; SET async_insert = 1; SET wait_for_async_insert = 0; SET async_insert_busy_timeout_ms = 1000000; INSERT INTO flush_test VALUES ('world', 23456); on node [gw5] PASSED test_restart_server/test.py::test_drop_memory_database test_restart_server/test.py::test_flushes_async_insert_queue Executing query GRANT SELECT(x7) ON tbl TO R3 on instance run container_id:roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 detach:False nothrow:False cmd: ['bash', '-c', 'cat /var/lib/clickhouse/disks/s3/store/80b/80be02d0-f8ca-4bbe-a817-fe48a347dcb8/detached/all_1_1_0/primary.cidx'] Command:[docker exec roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 bash -c cat /var/lib/clickhouse/disks/s3/store/80b/80be02d0-f8ca-4bbe-a817-fe48a347dcb8/detached/all_1_1_0/primary.cidx] Stdout:5 Stdout:1 50 Stdout:50 old-style-prefix/with-several-section/elr/mygdfdmopfhnkfhrzuuduwnrhozaw Stdout:0 Stdout:1 Stdout: [gw3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/docker-compose.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/docker-compose.yml stop --timeout 20] Executing query SELECT count(*) FROM test_static_override on node1 Executing query SELECT default_compression_codec FROM system.parts where name = 'all_0_0_0' and table = 'recompression_replicated' on node2 run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrestartserver-gw5-node-1 bash -c ps -C clickhouse] Executing query GRANT SELECT(x2) ON tbl TO R2 on instance run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout: PID TTY TIME CMD Stdout: 727 ? 00:00:01 clickhouse run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrestartserver-gw5-node-1 bash -c pkill clickhouse] Stdout:781 Clickhouse process running. 
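The repeated probes of system.parts around recompression (the 'all_1_1_4' lookups and the default_compression_codec checks on recompression_replicated) are polling loops: TTL ... RECOMPRESS CODEC(ZSTD(13)) rewrites a part in a background merge, so the test must wait for the recompressed part to appear before asserting its codec. A sketch of that wait, with node standing in for the harness's query helper (an assumption):

    import time

    def wait_part_recompressed(node, table, part, timeout=60.0):
        """Poll system.parts until `part` exists, then return its codec (sketch)."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            found = node.query(
                f"SELECT name FROM system.parts "
                f"WHERE name = '{part}' AND table = '{table}'"
            ).strip()
            if found == part:
                # Expected to be ZSTD(13) once the recompression merge has run.
                return node.query(
                    f"SELECT default_compression_codec FROM system.parts "
                    f"WHERE name = '{part}' AND table = '{table}'"
                ).strip()
            time.sleep(0.5)
        raise TimeoutError(f"{table}/{part} never appeared")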
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:781 Executing query select 20 on node Stdout:727 http://localhost:None "GET /v1.46/exec/7ee379e6761ed7455f95561b3bb369251e7ad5e45d804d91dbd388b631afe33d/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottests3accessheaders-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/custom-auth-token: InvalidToken1234/custom-auth-token: ValidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml"] Command:[docker exec roottests3accessheaders-gw9-node1-1 bash -c sed -i 's/custom-auth-token: InvalidToken1234/custom-auth-token: ValidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml] Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1 Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost No clickhouse process running. Start new one. 
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 Executing query SYSTEM RELOAD CONFIG on node1 Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance http://localhost:None "POST /v1.46/exec/6b7b0de35a6171e295875cbc16879521c20afa058b65b7fe3e89d56b73f0d90b/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/6b7b0de35a6171e295875cbc16879521c20afa058b65b7fe3e89d56b73f0d90b/json HTTP/1.1" 200 586 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connection dropped: outstanding heartbeat ping not received Transition to CONNECTING Zookeeper connection lost Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Executing query SELECT count(*) FROM test_static_override on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False [gw9] PASSED test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header] Executing query SET s3_truncate_on_insert=1; INSERT INTO FUNCTION s3('http://minio1:9001/root/test_access_header.csv', 'minio', 'minio123','CSV') SELECT number as a, toString(number) as b FROM numbers(3); on node1 test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header] Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) 
UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Executing query DROP TABLE IF EXISTS test_access_header; CREATE TABLE test_access_header (name String, value UInt32) ENGINE=S3('http://resolver:8081/root/test_access_header.csv', 'CSV'); on node1 Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1 Executing query INSERT INTO test (key, value) VALUES (0, 'a'); SELECT * FROM test; on node Executing query DROP ROLE R3 on instance run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:727 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SYSTEM DROP QUERY CACHE on node1 Stdout:1659 Clickhouse process running. 
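The long query repeated throughout these records is a 10-way UNION ALL over viewIfPermitted, one branch per column grant being tested. A hedged sketch of how such SQL could be assembled (the real test may build it differently):

    # Hypothetical builder for the repeated test query; column count and the
    # table name `tbl` are taken from the log, the function is illustrative.
    def permitted_union(columns: int = 10) -> str:
        parts = [
            f"SELECT * FROM viewIfPermitted(SELECT x{i} AS c FROM tbl "
            f"ELSE null('c Int64'))"
            for i in range(1, columns + 1)
        ]
        return " UNION ALL ".join(parts)

    print(permitted_union())

Each branch returns the column when the role has SELECT on it and an empty null('c Int64') stream otherwise, so the row count reveals which grants are visible.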
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query GRANT SELECT(x7) ON tbl TO R2 on instance
Stdout:1659
Executing query select 20 on node1
Executing query DROP TABLE test; on node
Executing query SELECT count(*) FROM test_access_header on node1
Executing query CREATE TABLE test (key UInt64, value String) Engine=EmbeddedRocksDB(0, '/var/lib/clickhouse/store/test_rocksdb_read_only_missing', 1) PRIMARY KEY(key); on node
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
run container_id:roottests3accessheaders-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/custom-auth-token: ValidToken1234/custom-auth-token: InvalidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml"]
Command:[docker exec roottests3accessheaders-gw9-node1-1 bash -c sed -i 's/custom-auth-token: ValidToken1234/custom-auth-token: InvalidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml]
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestrocksdbreadonly-gw2-node-1 bash -c ps -C clickhouse]
Executing query SYSTEM RELOAD CONFIG on node1
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Stdout: PID TTY TIME CMD
Stdout: 781 ? 00:00:01 clickhouse
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestrocksdbreadonly-gw2-node-1 bash -c pkill clickhouse]
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:781
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query select 20 on node1
Executing query SELECT count(*) FROM test_access_header on node1
run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:727
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse]
Executing query GRANT SELECT(x10) ON tbl TO R1 on instance
Stdout: PID TTY TIME CMD
Stdout: 1659 ? 00:00:02 clickhouse
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse]
run container_id:roottests3accessheaders-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/custom-auth-token: InvalidToken1234/custom-auth-token: ValidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml"]
Command:[docker exec roottests3accessheaders-gw9-node1-1 bash -c sed -i 's/custom-auth-token: InvalidToken1234/custom-auth-token: ValidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml]
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SYSTEM RELOAD CONFIG on node1
Stdout:1659
Executing query CREATE ROLE R3 on instance
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SELECT count(*) FROM test_access_header on node1
Executing query GRANT R3 TO R2 on instance
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
[gw9] PASSED test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header]
Executing query SET s3_truncate_on_insert=1; INSERT INTO FUNCTION s3('http://minio1:9001/root/test_named_colections.csv', 'minio', 'minio123','CSV') SELECT number as a, toString(number) as b FROM numbers(3); on node1
test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header]
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:781
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query DROP TABLE IF EXISTS test_named_colections; CREATE TABLE test_named_colections (name String, value UInt32) ENGINE=S3(s3_mock, format='CSV'); on node1
run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:727
Stdout:1474
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SYSTEM DROP QUERY CACHE on node1
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1659
Executing query SELECT count(*) FROM test_named_colections on node1
Executing query GRANT SELECT(x6) ON tbl TO R1 on instance
run container_id:roottests3accessheaders-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/custom-auth-token: ValidToken1234/custom-auth-token: InvalidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml"]
Command:[docker exec roottests3accessheaders-gw9-node1-1 bash -c sed -i 's/custom-auth-token: ValidToken1234/custom-auth-token: InvalidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml]
Executing query GRANT R3 TO C on instance
Executing query SYSTEM RELOAD CONFIG on node1
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Stdout:781
Executing query SELECT count(*) FROM test_named_colections on node1
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
run container_id:roottests3accessheaders-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "sed -i 's/custom-auth-token: InvalidToken1234/custom-auth-token: ValidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml"]
Command:[docker exec roottests3accessheaders-gw9-node1-1 bash -c sed -i 's/custom-auth-token: InvalidToken1234/custom-auth-token: ValidToken1234/g' /etc/clickhouse-server/config.d/s3_headers.xml]
run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Connection dropped: socket connection error: No route to host
Connection dropped: socket connection error: No route to host
run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SYSTEM RELOAD CONFIG on node1
No clickhouse process running. Start new one.
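The sed records flip the custom-auth-token between ValidToken1234 and InvalidToken1234 in s3_headers.xml, and each flip is followed by SYSTEM RELOAD CONFIG. A minimal sketch of that flip, assuming a hypothetical helper around the exact commands shown in the log:

    import subprocess

    # Rewrite the header value in place inside the container, then the test
    # issues SYSTEM RELOAD CONFIG (via the harness's instance.query) so the
    # server picks up the new token without a restart.
    def set_auth_token(container: str, old: str, new: str) -> None:
        sed = (
            f"sed -i 's/custom-auth-token: {old}/custom-auth-token: {new}/g' "
            "/etc/clickhouse-server/config.d/s3_headers.xml"
        )
        subprocess.run(["docker", "exec", container, "bash", "-c", sed],
                       check=True)

The subsequent SELECT count(*) against the S3-backed table then either succeeds or fails depending on which token is active.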
http://localhost:None "POST /v1.46/containers/roottestrestartserver-gw5-node-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/9803145dece2faf856c26a7b0e1449be3793d47f97bfd36e0fca095a431646a6/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/9803145dece2faf856c26a7b0e1449be3793d47f97bfd36e0fca095a431646a6/json HTTP/1.1" 200 586 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance http://localhost:None "GET /v1.46/exec/6b7b0de35a6171e295875cbc16879521c20afa058b65b7fe3e89d56b73f0d90b/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT count(*) FROM test_named_colections on node1 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False No clickhouse process running. Start new one. 
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/8feb067570f7659941c73d9358204b1718147108329a52323945508e695a1c00/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/8feb067570f7659941c73d9358204b1718147108329a52323945508e695a1c00/json HTTP/1.1" 200 586 Executing query DROP ROLE R1 on instance Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --project-name roottests3accessheaders-gw9 --file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml stop --timeout 20] [gw9] PASSED test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header] Executing query DROP ROLE R2 on instance run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:781 Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:1511 Clickhouse process running. 
run container_id:roottestrestartserver-gw5-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrestartserver-gw5-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1511
Executing query select 20 on node
Stderr: Container roottestrestorereplica-gw8-replica1-1 Stopping
Stderr: Container roottestrestorereplica-gw8-replica2-1 Stopping
Stderr: Container roottestrestorereplica-gw8-replica3-1 Stopping
Stderr: Container roottestrestorereplica-gw8-replica3-1 Stopped
Stderr: Container roottestrestorereplica-gw8-replica2-1 Stopped
Stderr: Container roottestrestorereplica-gw8-replica1-1 Stopped
Stderr: Container roottestrestorereplica-gw8-zoo3-1 Stopping
Stderr: Container roottestrestorereplica-gw8-zoo1-1 Stopping
Stderr: Container roottestrestorereplica-gw8-zoo2-1 Stopping
Stderr: Container roottestrestorereplica-gw8-zoo3-1 Stopped
Stderr: Container roottestrestorereplica-gw8-zoo2-1 Stopped
Stderr: Container roottestrestorereplica-gw8-zoo1-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Command:[bash -c [ -f /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/.env --project-name roottestrestorereplica-gw8 --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica2/docker-compose.yml --file /ClickHouse/tests/integration/test_restore_replica/_instances-0-gw8/replica3/docker-compose.yml down --volumes]
Stdout:2432
Clickhouse process running.
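After stopping a cluster, the harness zgreps each node's stderr.log* for "==================", the delimiter that sanitizer (ASan/TSan/MSan) reports print around themselves. A hedged Python equivalent of that scan (helper name and return convention are assumptions):

    import gzip
    import pathlib

    def has_sanitizer_report(logs_dir: str) -> bool:
        # Mirrors `zgrep -aH "==================" .../stderr.log*`: plain and
        # rotated gzip files are both checked.
        for path in pathlib.Path(logs_dir).glob("stderr.log*"):
            opener = gzip.open if path.suffix == ".gz" else open
            with opener(path, "rt", errors="replace") as fh:
                if any("==================" in line for line in fh):
                    return True
        return False

An empty scan (as here) means the instances shut down without any sanitizer findings.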
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Stdout:2432
Executing query select 20 on node1
Stderr: node Pulling
Stderr: node Pulled
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/.env --project-name roottestprometheusendpoint-gw0 --file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/.env --project-name roottestprometheusendpoint-gw0 --file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/docker-compose.yml up -d --no-recreate]
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:781
Stdout:1579
Executing query select 20 on node
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
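The recurring "Executing query select 20" records are the harness's readiness probe after a (re)start: keep issuing a trivial query until the server answers. A minimal sketch under that assumption; `query` stands in for the harness's instance.query callable:

    import time

    def wait_for_server(query, timeout: float = 60.0) -> None:
        # Poll with a trivial constant query; any exception means the server
        # is not accepting queries yet.
        deadline = time.monotonic() + timeout
        while True:
            try:
                assert query("select 20").strip() == "20"
                return
            except Exception:
                if time.monotonic() > deadline:
                    raise
                time.sleep(0.5)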
Executing query GRANT SELECT(x6) ON tbl TO R3 on instance
Executing query select 20 on node1
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'rm -r /var/lib/clickhouse/store/test_rocksdb_read_only_missing']
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c rm -r /var/lib/clickhouse/store/test_rocksdb_read_only_missing]
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestrocksdbreadonly-gw2-node-1/exec HTTP/1.1" 201 74
http://localhost:None "POST /v1.46/exec/68ccf7f7ef5477976535d7f920bf72a3db455f534610aa362731b7ea9cbbb75a/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/68ccf7f7ef5477976535d7f920bf72a3db455f534610aa362731b7ea9cbbb75a/json HTTP/1.1" 200 586
Executing query SELECT * FROM flush_test on node
Stderr: Container roottestrestorereplica-gw8-replica1-1 Stopping
Stderr: Container roottestrestorereplica-gw8-replica3-1 Stopping
Stderr: Container roottestrestorereplica-gw8-replica2-1 Stopping
Stderr: Container roottestrestorereplica-gw8-replica1-1 Stopped
Stderr: Container roottestrestorereplica-gw8-replica1-1 Removing
Stderr: Container roottestrestorereplica-gw8-replica2-1 Stopped
Stderr: Container roottestrestorereplica-gw8-replica2-1 Removing
Stderr: Container roottestrestorereplica-gw8-replica3-1 Stopped
Stderr: Container roottestrestorereplica-gw8-replica3-1 Removing
Stderr: Container roottestrestorereplica-gw8-replica1-1 Removed
Stderr: Container roottestrestorereplica-gw8-replica3-1 Removed
Stderr: Container roottestrestorereplica-gw8-replica2-1 Removed
Stderr: Container roottestrestorereplica-gw8-zoo1-1 Stopping
Stderr: Container roottestrestorereplica-gw8-zoo2-1 Stopping
Stderr: Container roottestrestorereplica-gw8-zoo3-1 Stopping
Stderr: Container roottestrestorereplica-gw8-zoo2-1 Stopped
Stderr: Container roottestrestorereplica-gw8-zoo2-1 Removing
Stderr: Container roottestrestorereplica-gw8-zoo3-1 Stopped
Stderr: Container roottestrestorereplica-gw8-zoo3-1 Removing
Stderr: Container roottestrestorereplica-gw8-zoo1-1 Stopped
Stderr: Container roottestrestorereplica-gw8-zoo1-1 Removing
Stderr: Container roottestrestorereplica-gw8-zoo1-1 Removed
Stderr: Container roottestrestorereplica-gw8-zoo3-1 Removed
Stderr: Container roottestrestorereplica-gw8-zoo2-1 Removed
Stderr: Network roottestrestorereplica-gw8_default Removing
Stderr: Network roottestrestorereplica-gw8_default Removed
Cleanup called
Docker networks for project roottestrestorereplica-gw8 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestrestorereplica-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestrestorereplica-gw8 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestrestorereplica-gw8-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestrestorereplica-gw8
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped
ENV DOCKER_KERBEROS_KDC_TAG 9391ecdee8d7
ENV CLICKHOUSE_TESTS_SERVER_BIN_PATH /clickhouse
ENV MSAN_OPTIONS abort_on_error=1 poison_in_dtor=1
ENV JAVA_TOOL_OPTIONS -Djdk.attach.allowAttachSelf=true
ENV TSAN_OPTIONS halt_on_error=1 abort_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1
ENV HOSTNAME 2360da140b68
ENV SHLVL 0
ENV HOME /root
ENV OLDPWD /
ENV DOCKER_HELPER_TAG 5dc43a6382f0
ENV PYTHONUNBUFFERED 1
ENV DOCKER_PYTHON_BOTTLE_TAG caad4729259e
ENV UBSAN_OPTIONS print_stacktrace=1
ENV PYTEST_ADDOPTS --dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_postgresql_database_engine/test.py::test_datetime test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl test_postgresql_database_engine/test.py::test_postgres_database_old_syntax test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl test_postgresql_database_engine/test.py::test_postgresql_database_with_schema test_postgresql_database_engine/test.py::test_postgresql_fetch_tables test_postgresql_database_engine/test.py::test_postgresql_password_leak test_postgresql_database_engine/test.py::test_predefined_connection_configuration test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order test_prometheus_endpoint/test.py::test_prometheus_endpoint test_prometheus_protocols/test.py::test_64bit_id test_prometheus_protocols/test.py::test_create_as_table test_prometheus_protocols/test.py::test_custom_id_algorithm test_prometheus_protocols/test.py::test_default test_prometheus_protocols/test.py::test_external_tables test_prometheus_protocols/test.py::test_inner_engines test_prometheus_protocols/test.py::test_read_auth test_prometheus_protocols/test.py::test_remote_write_v1_status_code test_prometheus_protocols/test.py::test_tags_to_columns test_range_hashed_dictionary_types/test.py::test_range_hashed_dict test_read_only_table/test.py::test_restart_zookeeper test_recompression_ttl/test.py::test_recompression_multiple_ttls test_recompression_ttl/test.py::test_recompression_replicated test_recompression_ttl/test.py::test_recompression_simple test_recovery_time_metric/test.py::test_recovery_time_metric test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db test_refreshable_mv/test.py::test_refreshable_mv_in_system_db test_relative_filepath/test.py::test_filepath test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers test_reload_certificate/test.py::test_ECcert_reload test_reload_certificate/test.py::test_cert_with_pass_phrase test_reload_certificate/test.py::test_chain_reload test_reload_certificate/test.py::test_first_than_second_cert test_reload_clusters_config/test.py::test_add_cluster test_reload_clusters_config/test.py::test_delete_cluster test_reload_clusters_config/test.py::test_simple_reload test_reload_clusters_config/test.py::test_update_one_cluster test_reloading_settings_from_users_xml/test.py::test_force_reload test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout 'test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain]' test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4]' test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format test_render_log_file_name_templates/test.py::test_check_file_names test_replica_can_become_leader/test.py::test_can_become_leader test_replica_is_active/test.py::test_replica_is_active test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped test_replicating_constants/test.py::test_different_versions test_replication_credentials/test.py::test_credentials_and_no_credentials test_replication_credentials/test.py::test_different_credentials test_replication_credentials/test.py::test_no_credentials test_replication_credentials/test.py::test_same_credentials test_replication_without_zookeeper/test.py::test_startup_without_zookeeper test_restart_server/test.py::test_drop_memory_database test_restart_server/test.py::test_flushes_async_insert_queue test_restore_replica/test.py::test_restore_replica_alive_replicas test_restore_replica/test.py::test_restore_replica_invalid_tables test_restore_replica/test.py::test_restore_replica_parallel test_restore_replica/test.py::test_restore_replica_sequential test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop test_rocksdb_read_only/test.py::test_read_only test_role/test.py::test_admin_option test_role/test.py::test_changing_default_roles_affects_new_sessions_only test_role/test.py::test_combine_privileges test_role/test.py::test_create_role test_role/test.py::test_function_current_roles test_role/test.py::test_grant_role_to_role test_role/test.py::test_introspection test_role/test.py::test_revoke_requires_admin_option 'test_role/test.py::test_role_expiration[False]' 'test_role/test.py::test_role_expiration[True]' test_role/test.py::test_roles_cache test_role/test.py::test_set_role test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 'test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header]' 'test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header]' 'test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header]' test_s3_cluster/test.py::test_ambiguous_join test_s3_cluster/test.py::test_cluster_default_expression test_s3_cluster/test.py::test_cluster_format_detection test_s3_cluster/test.py::test_cluster_with_header test_s3_cluster/test.py::test_cluster_with_named_collection test_s3_cluster/test.py::test_count test_s3_cluster/test.py::test_count_macro test_s3_cluster/test.py::test_distributed_insert_select_with_replicated -vvv -ss
ENV CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH /clickhouse-library-bridge
ENV COMPOSE_HTTP_TIMEOUT 600
ENV DOCKER_MYSQL_PHP_CLIENT_TAG 88be89c1e3b6
ENV DOCKER_DOTNET_CLIENT_TAG 11de0b29a15d
ENV CLICKHOUSE_TESTS_CLIENT_BIN_PATH /clickhouse
ENV DOCKER_MYSQL_JS_CLIENT_TAG 41ba7c2ec2a1
ENV PATH /spark-3.3.2-bin-hadoop3/bin:/opt/gdb/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV DOCKER_KERBERIZED_HADOOP_TAG latest
ENV DOCKER_CHANNEL stable
ENV DOCKER_CLIENT_TIMEOUT 300
ENV DOCKER_POSTGRESQL_JAVA_CLIENT_TAG a4eff5c7f4d6
ENV DOCKER_NGINX_DAV_TAG b55ac9cd7519
ENV DOCKER_MYSQL_GOLANG_CLIENT_TAG 9bec2a638e6e
ENV PWD /ClickHouse/tests/integration
ENV DOCKER_MYSQL_JAVA_CLIENT_TAG 766bff31cfe4
ENV CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH /clickhouse-odbc-bridge
ENV CLICKHOUSE_TESTS_BASE_CONFIG_DIR /clickhouse-config
ENV TZ Etc/UTC
ENV JAVA_PATH /usr/lib/jvm/java-11-openjdk-amd64/bin/java
ENV DOCKER_BASE_TAG 8b2301119731
ENV SPARK_HOME /spark-3.3.2-bin-hadoop3
ENV LC_CTYPE C.UTF-8
ENV INTEGRATION_TESTS_RUN_ID 0
ENV WORKER_FREE_PORTS 30400 30401 30402 30403 30404 30405 30406 30407 30408 30409 30410 30411 30412 30413 30414 30415 30416 30417 30418 30419 30420 30421 30422 30423 30424 30425 30426 30427 30428 30429 30430 30431 30432 30433 30434 30435 30436 30437 30438 30439 30440 30441 30442 30443 30444 30445 30446 30447 30448 30449
ENV PYTEST_XDIST_TESTRUNUID 3b02e9a4cb3e459b869a34972bebb8ec
ENV PYTEST_XDIST_WORKER gw8
ENV PYTEST_XDIST_WORKER_COUNT 10
ENV PYTEST_CURRENT_TEST test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped (setup)
CLUSTER INIT base_config_dir:/clickhouse-config
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Setup Keeper
Cluster name: project_name:roottestreplicatedzerocopyprojectionmutation-gw8. Added instance name:node1 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env', '--project-name', 'roottestreplicatedzerocopyprojectionmutation-gw8', '--file', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
Cluster name: project_name:roottestreplicatedzerocopyprojectionmutation-gw8. Added instance name:node2 tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env', '--project-name', 'roottestreplicatedzerocopyprojectionmutation-gw8', '--file', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml', '--file', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/
Running tests in /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/test.py
Cluster start called. is_up=False
Docker networks for project roottestreplicatedzerocopyprojectionmutation-gw8 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreplicatedzerocopyprojectionmutation-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreplicatedzerocopyprojectionmutation-gw8 are DRIVER VOLUME NAME
Cleanup called
Executing query GRANT SELECT(x10) ON tbl TO R3 on instance
Docker networks for project roottestreplicatedzerocopyprojectionmutation-gw8 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreplicatedzerocopyprojectionmutation-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreplicatedzerocopyprojectionmutation-gw8 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreplicatedzerocopyprojectionmutation-gw8-.*-1$' --format '{{.ID}}:{{.Names}}']
Command:[docker compose --env-file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/.env --project-name roottestrestartserver-gw5 --file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/docker-compose.yml stop --timeout 20]
[gw5] PASSED test_restart_server/test.py::test_flushes_async_insert_queue
Unstopped containers: {}
No running containers for project: roottestreplicatedzerocopyprojectionmutation-gw8
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/database
Setup logs dir /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
Setup directory for instance: node2
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/database
Setup logs dir /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/logs
Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!"
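The "Cleanup called" sequences repeat a fixed pass between test modules: list leftover containers for the project, then prune unused images and volumes. A hedged sketch of that pass, assuming a hypothetical wrapper around the exact commands in the records (the project name below is illustrative):

    import subprocess

    def cleanup_project(project: str) -> None:
        # Find any containers still matching the compose project's name pattern.
        leftovers = subprocess.run(
            ["docker", "container", "list", "--all",
             "--filter", f"name=^/{project}-.*-1$",
             "--format", "{{.ID}}:{{.Names}}"],
            capture_output=True, text=True,
        ).stdout.split()
        print("Unstopped containers:", leftovers or "{}")
        # Then reclaim what the finished run left behind.
        for cmd in (["docker", "image", "prune", "-f"],
                    ["docker", "volume", "prune", "-f"]):
            subprocess.run(cmd, check=True)

    cleanup_project("roottestreplicatedzerocopyprojectionmutation-gw8")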
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper3/coordination', 'MINIO_CERTS_DIR': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/minio/certs', 'MINIO_DATA_DIR': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/minio/data', 'MINIO_PORT': '9001', 'SSL_CERT_FILE': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/minio/certs/public.crt', 'RESOLVER_LOGS': '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/resolver', 'RESOLVER_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/docker-compose.yml pull] Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c 
Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query select 20 on node1
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:1622
Clickhouse process running.
run container_id:roottestrocksdbreadonly-gw2-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrocksdbreadonly-gw2-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Stdout:1622
Executing query select 20 on node
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse]
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Stdout: PID TTY TIME CMD
Stdout: 2432 ? 00:00:02 clickhouse
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse]
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:2432
Stderr: Container roottestrestartserver-gw5-node-1 Stopping
Stderr: Container roottestrestartserver-gw5-node-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/.env --project-name roottestrestartserver-gw5 --file /ClickHouse/tests/integration/test_restart_server/_instances-0-gw5/node/docker-compose.yml down --volumes]
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query select 20 on node
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query GRANT SELECT(x8) ON tbl TO R3 on instance
Executing query INSERT INTO test (key, value) VALUES (1, 'b'); on node
Stderr: Container roottestrestartserver-gw5-node-1 Stopping
Stderr: Container roottestrestartserver-gw5-node-1 Stopped
Stderr: Container roottestrestartserver-gw5-node-1 Removing
Stderr: Container roottestrestartserver-gw5-node-1 Removed
Stderr: Network roottestrestartserver-gw5_default Removing
Stderr: Network roottestrestartserver-gw5_default Removed
Cleanup called
Docker networks for project roottestrestartserver-gw5 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestrestartserver-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestrestartserver-gw5 are DRIVER VOLUME NAME
Stderr: Network roottestprometheusendpoint-gw0_default Creating
Command:[docker container list --all --filter name='^/roottestrestartserver-gw5-.*-1$' --format '{{.ID}}:{{.Names}}']
Stderr: Network roottestprometheusendpoint-gw0_default Created
Stderr: Container roottestprometheusendpoint-gw0-node-1 Creating
Stderr: Container roottestprometheusendpoint-gw0-node-1 Created
Stderr: Container roottestprometheusendpoint-gw0-node-1 Starting
Stderr: Container roottestprometheusendpoint-gw0-node-1 Started
ClickHouse instance created
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestprometheusendpoint-gw0-node-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node
http://localhost:None "GET /v1.46/containers/roottestprometheusendpoint-gw0-node-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node, ip: 172.16.2.2...
http://localhost:None "GET /v1.46/containers/roottestprometheusendpoint-gw0-node-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Unstopped containers: {}
No running containers for project: roottestrestartserver-gw5
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
Running tests in /ClickHouse/tests/integration/test_read_only_table/test.py
Cluster start called. is_up=False
test_read_only_table/test.py::test_restart_zookeeper
Executing query CREATE ROLE R1 on instance
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Docker networks for project roottestreadonlytable-gw5 are NETWORK ID NAME DRIVER SCOPE
Executing query SELECT * FROM test; on node
Docker containers for project roottestreadonlytable-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreadonlytable-gw5 are DRIVER VOLUME NAME
Cleanup called
Docker networks for project roottestreadonlytable-gw5 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreadonlytable-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreadonlytable-gw5 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreadonlytable-gw5-.*-1$' --format '{{.ID}}:{{.Names}}']
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Unstopped containers: {}
No running containers for project: roottestreadonlytable-gw5
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
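[Editor's note] The repeated `ps ax | grep 'clickhouse' ...` records above are the harness polling for a server PID inside a container. A minimal sketch of that pattern in Python, assuming the docker CLI is on PATH (the helper name and use of subprocess are illustrative, not the harness's actual code):

    import subprocess

    PS_PIPELINE = (
        "ps ax | grep 'clickhouse' | grep -v 'grep' "
        "| grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"
    )

    def clickhouse_pids(container: str) -> list[int]:
        # Same pipeline the log shows: list PIDs of clickhouse processes,
        # filtering out the grep/bash helpers themselves.
        out = subprocess.run(
            ["docker", "exec", container, "bash", "-c", PS_PIPELINE],
            capture_output=True, text=True, check=True,
        ).stdout
        return [int(tok) for tok in out.split()]

    # An empty list corresponds to "No clickhouse process running." in the log;
    # a non-empty one to "Stdout:1622 / Clickhouse process running."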
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
Stdout:2432
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/database
Setup logs dir /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Setup directory for instance: node2
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/database
Setup logs dir /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Setup directory for instance: node3
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/database
Setup logs dir /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --project-name roottestreadonlytable-gw5 --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/docker-compose.yml pull]
Executing query CREATE ROLE R2 on instance
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query DROP TABLE test; on node
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query GRANT R1 TO A on instance
[gw2] PASSED test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop
test_rocksdb_read_only/test.py::test_read_only
Executing query CREATE TABLE test (key UInt64, value String) Engine=EmbeddedRocksDB(0, '/var/lib/clickhouse/store/test_rocksdb_read_only', 1) PRIMARY KEY(key); on node
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query CREATE TABLE test (key UInt64, value String) Engine=EmbeddedRocksDB(0, '/var/lib/clickhouse/store/test_rocksdb_read_only') PRIMARY KEY(key); INSERT INTO test (key, value) VALUES (0, 'a'), (1, 'b'), (2, 'c'); on node
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query CREATE TABLE test_fail (key UInt64, value String) Engine=EmbeddedRocksDB(0, '/var/lib/clickhouse/store/test_rocksdb_read_only') PRIMARY KEY(key); on node
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
Executing query CREATE TABLE test_fail (key UInt64, value String) Engine=EmbeddedRocksDB(10, '/var/lib/clickhouse/store/test_rocksdb_read_only') PRIMARY KEY(key); on node
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:2432
http://localhost:None "GET /v1.46/containers/c3e62bdd2575a79675efc7c38ca6a2ccb33ceee84aef7e5bf446fbe372e0c571/json HTTP/1.1" 200 None
ClickHouse node started
Starting new HTTP connection (1): 172.16.2.2:8001
Executing query CREATE TABLE test_1 (key UInt64, value String) Engine=EmbeddedRocksDB(0, '/var/lib/clickhouse/store/test_rocksdb_read_only', 1) PRIMARY KEY(key); DROP TABLE test_1; on node
Executing query GRANT R2 TO B on instance
Executing query CREATE TABLE test_2 (key UInt64, value String) Engine=EmbeddedRocksDB(10, '/var/lib/clickhouse/store/test_rocksdb_read_only', 1) PRIMARY KEY(key); DROP TABLE test_2; on node
Executing query GRANT SELECT(x7) ON tbl TO R3 on instance
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query DROP TABLE test; CREATE TABLE test (key UInt64, value String) Engine=EmbeddedRocksDB(10, '/var/lib/clickhouse/store/test_rocksdb_read_only', 1) PRIMARY KEY(key); on node
Starting new HTTP connection (1): 172.16.2.2:8001
http://172.16.2.2:8001 "GET /metrics HTTP/1.1" 200 None
Executing query SELECT 1 on node
Executing query SELECT count() FROM test; on node
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query SELECT 2 on node
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
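[Editor's note] The long viewIfPermitted queries repeated throughout this run are mechanical: one branch per column x1..x10, each degrading to an empty null('c Int64') source when the current role lacks SELECT on that column, so grant visibility can be probed in a single query instead of ten failing ones. A sketch of how such a query can be generated (hypothetical helper, not the test's actual code):

    def view_if_permitted_union(table: str = "tbl", columns: int = 10) -> str:
        # One viewIfPermitted branch per column; an ungranted column yields
        # an empty NULL-typed source instead of an ACCESS_DENIED error.
        branches = [
            f"SELECT * FROM viewIfPermitted(SELECT x{i} AS c FROM {table} "
            f"ELSE null('c Int64'))"
            for i in range(1, columns + 1)
        ]
        return " UNION ALL ".join(branches)

    # print(view_if_permitted_union())  # reproduces the query seen in the log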
Executing query INSERT INTO test (key, value) VALUES (4, 'd'); on node
Executing query SELECT 3 on node
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:2432
Executing query DROP TABLE test; on node
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query GRANT SELECT(x9) ON tbl TO R1 on instance
Starting new HTTP connection (1): 172.16.2.2:8001
http://172.16.2.2:8001 "GET /metrics HTTP/1.1" 200 None
Executing query SELECT throwIf(1, 'test', toInt16(42)) SETTINGS allow_custom_error_code_in_throwif=1 on node
Command:[docker compose --env-file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/.env --project-name roottestrocksdbreadonly-gw2 --file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/docker-compose.yml stop --timeout 20]
[gw2] PASSED test_rocksdb_read_only/test.py::test_read_only
Executing query GRANT R3 TO R2 on instance
Starting new HTTP connection (1): 172.16.2.2:8001
http://172.16.2.2:8001 "GET /metrics HTTP/1.1" 200 None
Command:[docker compose --env-file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/.env --project-name roottestprometheusendpoint-gw0 --file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/docker-compose.yml stop --timeout 20]
[gw0] PASSED test_prometheus_endpoint/test.py::test_prometheus_endpoint
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Stderr: Container roottestrocksdbreadonly-gw2-node-1 Stopping
Stderr: Container roottestrocksdbreadonly-gw2-node-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/.env --project-name roottestrocksdbreadonly-gw2 --file /ClickHouse/tests/integration/test_rocksdb_read_only/_instances-0-gw2/node/docker-compose.yml down --volumes]
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Connection dropped: socket connection error: No route to host
Connection dropped: socket connection error: No route to host
Connection dropped: socket connection error: No route to host
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Stdout:2432
Stderr: Container roottestrocksdbreadonly-gw2-node-1 Stopping
Stderr: Container roottestrocksdbreadonly-gw2-node-1 Stopped
Stderr: Container roottestrocksdbreadonly-gw2-node-1 Removing
Stderr: Container roottestrocksdbreadonly-gw2-node-1 Removed
Stderr: Network roottestrocksdbreadonly-gw2_default Removing
Stderr: Network roottestrocksdbreadonly-gw2_default Removed
Cleanup called
Executing query GRANT SELECT(x5) ON tbl TO R3 on instance
Docker networks for project roottestrocksdbreadonly-gw2 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestrocksdbreadonly-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestrocksdbreadonly-gw2 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestrocksdbreadonly-gw2-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestrocksdbreadonly-gw2
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
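[Editor's note] The test_rocksdb_read_only queries above exercise EmbeddedRocksDB's optional engine arguments: TTL, a custom rocksdb directory, and a read_only flag as the third argument. A condensed sketch of the same sequence as harness-style node.query(...) calls (node is assumed to be a started instance object; illustrative only, not the test's exact code):

    # Assumes `node` is a running ClickHouse instance object from the
    # integration-test harness; DIR matches the path used in the log.
    DIR = "/var/lib/clickhouse/store/test_rocksdb_read_only"

    # Writable table over an explicit RocksDB directory (TTL=0, no read_only).
    node.query(
        f"CREATE TABLE test (key UInt64, value String) "
        f"Engine=EmbeddedRocksDB(0, '{DIR}') PRIMARY KEY(key)"
    )
    node.query("INSERT INTO test (key, value) VALUES (0, 'a'), (1, 'b'), (2, 'c')")

    # Reopen the same directory read-only (third argument = 1), as the log's
    # "DROP TABLE test; CREATE TABLE test ... EmbeddedRocksDB(10, ..., 1)" does:
    # the existing data stays readable, while new writes are expected to fail.
    node.query("DROP TABLE test")
    node.query(
        f"CREATE TABLE test (key UInt64, value String) "
        f"Engine=EmbeddedRocksDB(10, '{DIR}', 1) PRIMARY KEY(key)"
    )
    assert node.query("SELECT count() FROM test").strip() == "3"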
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
Running tests in /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/test.py
Cluster start called. is_up=False
test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order
Docker networks for project roottestprofilesettingsandconstraintsorder-gw2 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestprofilesettingsandconstraintsorder-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Docker volumes for project roottestprofilesettingsandconstraintsorder-gw2 are DRIVER VOLUME NAME
Cleanup called
Docker networks for project roottestprofilesettingsandconstraintsorder-gw2 are NETWORK ID NAME DRIVER SCOPE
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Docker containers for project roottestprofilesettingsandconstraintsorder-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Executing query GRANT SELECT(x1) ON tbl TO R3 on instance
Docker volumes for project roottestprofilesettingsandconstraintsorder-gw2 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestprofilesettingsandconstraintsorder-gw2-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestprofilesettingsandconstraintsorder-gw2
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/database
Setup logs dir /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Setup directory for instance: node2
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/database
Setup logs dir /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/.env --project-name roottestprofilesettingsandconstraintsorder-gw2 --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/docker-compose.yml pull]
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:2432
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SELECT * FROM viewIfPermitted(SELECT x1 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x2 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x3 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x4 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x5 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x6 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x7 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x8 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x9 AS c FROM tbl ELSE null('c Int64')) UNION ALL SELECT * FROM viewIfPermitted(SELECT x10 AS c FROM tbl ELSE null('c Int64')) on instance
Executing query DROP USER A, B, C on instance
Executing query DROP ROLE R3, R1, R2 on instance
Executing query DROP TABLE tbl on instance
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query DROP USER IF EXISTS A, B on instance
[gw1] PASSED test_role/test.py::test_roles_cache
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
http://localhost:None "GET /v1.46/exec/8feb067570f7659941c73d9358204b1718147108329a52323945508e695a1c00/json HTTP/1.1" 200 584
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/df7033a59dcd1546c744f2d2929fbfdbb68ef2261319c16594097914089b3637/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/df7033a59dcd1546c744f2d2929fbfdbb68ef2261319c16594097914089b3637/json HTTP/1.1" 200 586 Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Stderr: Container roottestprometheusendpoint-gw0-node-1 Stopping Stderr: Container roottestprometheusendpoint-gw0-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/.env --project-name roottestprometheusendpoint-gw0 --file /ClickHouse/tests/integration/test_prometheus_endpoint/_instances-0-gw0/node/docker-compose.yml down --volumes] test_role/test.py::test_set_role Executing query CREATE USER A on instance Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1 Executing query CREATE ROLE R1, R2 on instance Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Executing query GRANT R1, R2 TO A on instance Stderr: Container roottestprometheusendpoint-gw0-node-1 Stopping Stderr: Container roottestprometheusendpoint-gw0-node-1 Stopped Stderr: Container roottestprometheusendpoint-gw0-node-1 Removing Stderr: Container roottestprometheusendpoint-gw0-node-1 Removed Stderr: Network roottestprometheusendpoint-gw0_default Removing Stderr: Network roottestprometheusendpoint-gw0_default Removed Cleanup called Docker networks for project roottestprometheusendpoint-gw0 are NETWORK ID NAME DRIVER SCOPE Executing query SHOW CURRENT ROLES on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Docker containers for project roottestprometheusendpoint-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprometheusendpoint-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestprometheusendpoint-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None Unstopped containers: {} No running containers for project: roottestprometheusendpoint-gw0 Trying to prune unused networks... Executing query SET ROLE R1 on instance via HTTP interface Starting new HTTP connection (1): 172.16.3.2:8123 Trying to prune unused images... Command:[docker image prune -f] http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SET+ROLE+R1 HTTP/1.1" 200 None Executing query SHOW CURRENT ROLES on instance via HTTP interface Stdout:3272 Clickhouse process running. 
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Starting new HTTP connection (1): 172.16.3.2:8123
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:5
Command:[docker volume prune -f]
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
test_range_hashed_dictionary_types/test.py::test_range_hashed_dict
Running tests in /ClickHouse/tests/integration/test_range_hashed_dictionary_types/test.py
Cluster start called. is_up=False
Stdout:3272
Executing query select 20 on node1
Executing query SET ROLE R2 on instance via HTTP interface
Starting new HTTP connection (1): 172.16.3.2:8123
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SET+ROLE+R2 HTTP/1.1" 200 None
Executing query SHOW CURRENT ROLES on instance via HTTP interface
Starting new HTTP connection (1): 172.16.3.2:8123
Docker networks for project roottestrangehasheddictionarytypes-gw0 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestrangehasheddictionarytypes-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None
Docker volumes for project roottestrangehasheddictionarytypes-gw0 are DRIVER VOLUME NAME
Cleanup called
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Docker networks for project roottestrangehasheddictionarytypes-gw0 are NETWORK ID NAME DRIVER SCOPE
Executing query SET ROLE NONE on instance via HTTP interface
Starting new HTTP connection (1): 172.16.3.2:8123
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SET+ROLE+NONE HTTP/1.1" 200 None
Executing query SHOW CURRENT ROLES on instance via HTTP interface
Starting new HTTP connection (1): 172.16.3.2:8123
Docker containers for project roottestrangehasheddictionarytypes-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestrangehasheddictionarytypes-gw0 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestrangehasheddictionarytypes-gw0-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestrangehasheddictionarytypes-gw0
Trying to prune unused networks...
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None
Executing query SET ROLE DEFAULT on instance via HTTP interface
Starting new HTTP connection (1): 172.16.3.2:8123
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SET+ROLE+DEFAULT HTTP/1.1" 200 None
Executing query SHOW CURRENT ROLES on instance via HTTP interface
Starting new HTTP connection (1): 172.16.3.2:8123
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
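[Editor's note] test_set_role above drives role switching over the HTTP interface, tying SET ROLE and the following SHOW CURRENT ROLES together with a session_id parameter so both statements share one server-side session. A sketch with requests (address taken from the log; sending the user via the X-ClickHouse-User header is an assumption, the harness may authenticate differently):

    import requests

    BASE = "http://172.16.3.2:8123/"  # instance address seen in the log

    def http_query(sql: str, session: str, user: str = "A") -> str:
        # SET ROLE only affects later queries in the same session, hence the
        # shared session_id across requests.
        r = requests.get(
            BASE,
            params={"session_id": session, "query": sql},
            headers={"X-ClickHouse-User": user},  # assumed auth mechanism
        )
        r.raise_for_status()
        return r.text

    # http_query("SET ROLE R1", "session #4")
    # print(http_query("SHOW CURRENT ROLES", "session #4"))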
Command:[docker volume ls | wc -l]
http://172.16.3.2:8123 "GET /?session_id=session+%234&query=SHOW+CURRENT+ROLES HTTP/1.1" 200 None
Stdout:5
Command:[docker volume prune -f]
Executing query DROP USER IF EXISTS A, B on instance
[gw1] PASSED test_role/test.py::test_set_role
Stdout:Total reclaimed space: 0B
Volumes pruned: 5
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/database
Setup logs dir /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/.env --project-name roottestrangehasheddictionarytypes-gw0 --file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/docker-compose.yml pull]
Executing query DROP ROLE IF EXISTS R1, R2, R3, R4 on instance
Command:[docker compose --env-file /ClickHouse/tests/integration/test_role/_instances-0-gw1/.env --project-name roottestrole-gw1 --file /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/docker-compose.yml stop --timeout 20]
Executing query select 20 on node1
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SELECT default_compression_codec FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node1
Executing query SELECT name FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node2
Executing query select 20 on node1
Executing query SELECT default_compression_codec FROM system.parts where name = 'all_0_0_1' and table = 'recompression_replicated' on node2
[gw6] PASSED test_recompression_ttl/test.py::test_recompression_replicated
test_recompression_ttl/test.py::test_recompression_simple
Executing query CREATE TABLE table_for_recompression (d DateTime, key UInt64, data String) ENGINE MergeTree() ORDER BY tuple() TTL d + INTERVAL 10 SECOND RECOMPRESS CODEC(ZSTD(10)) SETTINGS merge_with_recompression_ttl_timeout = 0 on node1
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 3272 ? 00:00:02 clickhouse
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse]
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:3272
Executing query INSERT INTO table_for_recompression VALUES (now(), 1, '1') on node1
Executing query SELECT default_compression_codec FROM system.parts where name = 'all_1_1_0' on node1
Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1
Stderr: zoo2 Skipped - Image is already being pulled by zoo1
Stderr: zoo3 Skipped - Image is already being pulled by zoo1
Stderr: node2 Skipped - Image is already being pulled by zoo1
Stderr: node3 Skipped - Image is already being pulled by zoo1
Stderr: node1 Skipped - Image is already being pulled by zoo1
Stderr: zoo1 Pulling
Stderr: zoo1 Pulled
Setup ZooKeeper
Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper1/log', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper1/config', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper1/coordination', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper2/log', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper2/config', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper2/coordination', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper3/log', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper3/config', '/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/keeper3/coordination']
Command:[docker compose --project-name roottestreadonlytable-gw5 --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d]
Stderr: node1 Pulling
Stderr: node1 Pulled
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/.env --project-name roottestrangehasheddictionarytypes-gw0 --file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/.env --project-name roottestrangehasheddictionarytypes-gw0 --file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/docker-compose.yml up -d --no-recreate]
Stderr: node2 Skipped - Image is already being pulled by node1
Stderr: node1 Pulling
Stderr: node1 Pulled
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/.env --project-name roottestprofilesettingsandconstraintsorder-gw2 --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/.env --project-name roottestprofilesettingsandconstraintsorder-gw2 --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/docker-compose.yml up -d --no-recreate]
Stderr: zoo1 Skipped - Image is already being pulled by node2
Stderr: proxy1 Skipped - Image is already being pulled by proxy2
Stderr: zoo2 Skipped - Image is already being pulled by node2
Stderr: node1 Skipped - Image is already being pulled by node2
Stderr: zoo3 Skipped - Image is already being pulled by node2
Stderr: minio1 Pulling
Stderr: proxy2 Pulling
Stderr: node2 Pulling
Stderr: resolver Pulling
Stderr: proxy2 Pulled
Stderr: minio1 Pulled
Stderr: resolver Pulled
Stderr: node2 Pulled
Setup ZooKeeper
Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper1/log', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper1/config', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper1/coordination', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper2/log', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper2/config', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper2/coordination', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper3/log', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper3/config', '/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/keeper3/coordination']
Command:[docker compose --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d]
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:3272
Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1
Stderr:time="2025-04-02T03:20:43Z" level=trace msg="Docker Desktop integration not enabled"
Stderr: Network roottestreadonlytable-gw5_default Creating
Stderr: Network roottestreadonlytable-gw5_default Created
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Creating
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Creating
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Creating
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Created
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Created
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Created
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Starting
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Starting
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Starting
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Started
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Started
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Started
Stderr:time="2025-04-02T03:20:44Z" level=debug msg="otel error" error=""
Stderr:time="2025-04-02T03:20:44Z" level=debug msg="otel error" error=""
Wait ZooKeeper to start
get_instance_ip instance_name=zoo1
http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-zoo1-1/json HTTP/1.1" 200 None
get_kazoo_client: zoo1, ip:172.16.2.3, port:2181, use_ssl:False
Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False
Connection dropped: socket connection error: Connection refused
Stderr: Network roottestrangehasheddictionarytypes-gw0_default Creating
Stderr: Network roottestrangehasheddictionarytypes-gw0_default Created
Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Creating
Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Created
Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Starting
Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Started
ClickHouse instance created
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestrangehasheddictionarytypes-gw0-node1-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestrangehasheddictionarytypes-gw0-node1-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node1, ip: 172.16.5.2...
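[Editor's note] The `Connecting to ... / Connection dropped: socket connection error: Connection refused` pairs above are the harness retrying a kazoo connection while the just-started Keeper containers come up. A sketch of that wait loop (retry cadence and attempt count are illustrative):

    import time
    from kazoo.client import KazooClient

    def wait_for_zookeeper(ip: str, port: int = 2181, attempts: int = 30) -> KazooClient:
        # Keep reconnecting until the Keeper container accepts connections;
        # each refused attempt appears in the log as "Connection dropped".
        for _ in range(attempts):
            zk = KazooClient(hosts=f"{ip}:{port}")
            try:
                zk.start(timeout=5)  # raises if the handshake cannot complete
                return zk
            except Exception:
                zk.stop()
                zk.close()
                time.sleep(1)
        raise TimeoutError(f"ZooKeeper at {ip}:{port} never became available")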
http://localhost:None "GET /v1.46/containers/roottestrangehasheddictionarytypes-gw0-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: Network roottestprofilesettingsandconstraintsorder-gw2_default Creating Stderr: Network roottestprofilesettingsandconstraintsorder-gw2_default Created Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Creating Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Creating Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Created Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Created Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Starting Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Starting Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Started Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestprofilesettingsandconstraintsorder-gw2-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestprofilesettingsandconstraintsorder-gw2-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.6.2... 
http://localhost:None "GET /v1.46/containers/roottestprofilesettingsandconstraintsorder-gw2-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None Stdout:3272 Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Stopped Stderr: Container 
roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/.env --project-name roottestremoteblobsnamingbackwardcompatibility-gw3 --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/new_node/docker-compose.yml --file /ClickHouse/tests/integration/test_remote_blobs_naming/_instances-backward_compatibility-0-gw3/switching_node/docker-compose.yml down --volumes] http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None Stderr:time="2025-04-02T03:20:43Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreplicatedzerocopyprojectionmutation-gw8_default Creating Stderr: Network roottestreplicatedzerocopyprojectionmutation-gw8_default Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Starting Stderr: 
Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Started Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Started Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Started Stderr:time="2025-04-02T03:20:45Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:20:45Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.7.4, port:2181, use_ssl:False Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None Stdout:3272 http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a453db3c395ebdc033faaa855c3915d1322e6353543a1732ee7eb0c6cbf49c8f/json HTTP/1.1" 200 None ClickHouse node1 started run container_id:roottestrangehasheddictionarytypes-gw0-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '4990954156238030839\t2018-12-31 21:00:00\t2020-12-30 20:59:59\t0.1\tRU' > /var/lib/clickhouse/user_files/rates.tsv"] Command:[docker exec roottestrangehasheddictionarytypes-gw0-node1-1 bash -c echo '4990954156238030839 2018-12-31 21:00:00 2020-12-30 20:59:59 0.1 RU' > /var/lib/clickhouse/user_files/rates.tsv] http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None Executing query CREATE DICTIONARY rates ( hash_id UInt64, start_date DateTime default '0000-00-00 00:00:00', end_date DateTime default '0000-00-00 00:00:00', price Float64, currency String ) PRIMARY KEY hash_id SOURCE(file( path 
'/var/lib/clickhouse/user_files/rates.tsv' format 'TSV' )) LAYOUT(RANGE_HASHED()) RANGE(MIN start_date MAX end_date) LIFETIME(60); on node1 Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None Executing query SYSTEM RELOAD DICTIONARY default.rates on node1 http://localhost:None "GET /v1.46/containers/a358f29288290268e24cc1cddabee827cb2cf00a9ea16485dde7da442dc92a65/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestprofilesettingsandconstraintsorder-gw2-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestprofilesettingsandconstraintsorder-gw2-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.6.3... http://localhost:None "GET /v1.46/containers/roottestprofilesettingsandconstraintsorder-gw2-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/d8fbcd0724ffaa65b74b5a90f464de4edcb1dfdac26987b170d1c8623b7ef5ba/json HTTP/1.1" 200 None ClickHouse node2 started Executing query SELECT name, readonly FROM system.settings WHERE name == 'log_queries' on node1 Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-node-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-new_node-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-switching_node-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Stopped Stderr: Container 
roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-resolver-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo3-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo1-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-zoo2-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-minio1-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Stopping Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Stopped Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Removing Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy1-1 Removed Stderr: Container roottestremoteblobsnamingbackwardcompatibility-gw3-proxy2-1 Removed Stderr: Volume roottestremoteblobsnamingbackwardcompatibility-gw3_data1-1 Removing Stderr: Network roottestremoteblobsnamingbackwardcompatibility-gw3_default Removing Stderr: Volume roottestremoteblobsnamingbackwardcompatibility-gw3_data1-1 Removed Stderr: Network roottestremoteblobsnamingbackwardcompatibility-gw3_default Removed Cleanup called Docker networks for project roottestremoteblobsnamingbackwardcompatibility-gw3 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestremoteblobsnamingbackwardcompatibility-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestremoteblobsnamingbackwardcompatibility-gw3 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestremoteblobsnamingbackwardcompatibility-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestremoteblobsnamingbackwardcompatibility-gw3 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Executing query SELECT dictGetString('default.rates', 'currency', toUInt64(4990954156238030839), toDateTime('2019-10-01 00:00:00')) on node1 Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_relative_filepath/test.py::test_filepath Running tests in /ClickHouse/tests/integration/test_relative_filepath/test.py Cluster start called. 
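Interleaved above is the whole test_range_hashed_dict flow on gw0: one TSV row is echoed into user_files, a RANGE_HASHED dictionary is created over it, reloaded, and then probed with dictGetString. A sketch of that SQL sequence, lightly reformatted, assuming a query(sql) callable that sends SQL to node1 (in the harness this is node.query); the row and the dictionary definition are copied from the log:

    TSV_ROW = "4990954156238030839\t2018-12-31 21:00:00\t2020-12-30 20:59:59\t0.1\tRU"

    def range_hashed_smoke(query):
        # rates.tsv must already contain TSV_ROW (the docker exec echo above).
        query("""
            CREATE DICTIONARY rates (
                hash_id UInt64,
                start_date DateTime DEFAULT '0000-00-00 00:00:00',
                end_date   DateTime DEFAULT '0000-00-00 00:00:00',
                price Float64,
                currency String
            ) PRIMARY KEY hash_id
            SOURCE(FILE(path '/var/lib/clickhouse/user_files/rates.tsv' format 'TSV'))
            LAYOUT(RANGE_HASHED())
            RANGE(MIN start_date MAX end_date)
            LIFETIME(60)
        """)
        query("SYSTEM RELOAD DICTIONARY default.rates")
        # A timestamp inside [start_date, end_date] resolves to that row:
        return query(
            "SELECT dictGetString('default.rates', 'currency', "
            "toUInt64(4990954156238030839), toDateTime('2019-10-01 00:00:00'))"
        )  # -> 'RU'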
is_up=False Docker networks for project roottestrelativefilepath-gw3 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrelativefilepath-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrelativefilepath-gw3 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestrelativefilepath-gw3 are NETWORK ID NAME DRIVER SCOPE Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Executing query SELECT name, readonly FROM system.settings WHERE name == 'log_queries' on node2 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Docker containers for project roottestrelativefilepath-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrelativefilepath-gw3 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrelativefilepath-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrelativefilepath-gw3 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Command:[docker compose --env-file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/.env --project-name roottestrangehasheddictionarytypes-gw0 --file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/docker-compose.yml stop --timeout 20] [gw0] PASSED test_range_hashed_dictionary_types/test.py::test_range_hashed_dict Stdout:3 Command:[docker volume prune -f] Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_relative_filepath/configs/config.xml'] to /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/database Setup logs dir /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/.env --project-name roottestrelativefilepath-gw3 --file 
/ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/docker-compose.yml pull] Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Command:[docker compose --env-file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/.env --project-name roottestprofilesettingsandconstraintsorder-gw2 --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/docker-compose.yml stop --timeout 20] [gw2] PASSED test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:3272 Stderr: Container roottestrole-gw1-instance-1 Stopping Stderr: Container roottestrole-gw1-instance-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_role/_instances-0-gw1/.env --project-name roottestrole-gw1 --file /ClickHouse/tests/integration/test_role/_instances-0-gw1/instance/docker-compose.yml down --volumes] Connection dropped: socket connection error: Connection refused Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Stderr: Container roottestrole-gw1-instance-1 Stopping Stderr: Container roottestrole-gw1-instance-1 Stopped Stderr: Container roottestrole-gw1-instance-1 Removing Stderr: Container roottestrole-gw1-instance-1 Removed Stderr: Network roottestrole-gw1_default Removing Stderr: Network roottestrole-gw1_default Removed Cleanup called Docker networks for project roottestrole-gw1 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrole-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrole-gw1 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrole-gw1-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrole-gw1 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_recovery_time_metric/test.py::test_recovery_time_metric Running tests in /ClickHouse/tests/integration/test_recovery_time_metric/test.py Cluster start called. 
is_up=False run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Docker networks for project roottestrecoverytimemetric-gw1 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrecoverytimemetric-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrecoverytimemetric-gw1 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestrecoverytimemetric-gw1 are NETWORK ID NAME DRIVER SCOPE Stdout:3272 Docker containers for project roottestrecoverytimemetric-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrecoverytimemetric-gw1 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrecoverytimemetric-gw1-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Unstopped containers: {} No running containers for project: roottestrecoverytimemetric-gw1 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_recovery_time_metric/configs/config.xml'] to /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/database Setup logs dir /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
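The entrypoint above starts the server with --daemon and then parks the container on 'coproc tail -f /dev/null', so PID 1 stays alive after the server forks and the trap can deliver a clean shutdown; this is also why the PID probe filters out 'coproc'. A sketch of launching a container with that entrypoint through the Python docker SDK, purely illustrative (image tag taken from a later log line; config and log volume mounts omitted):

    import docker

    ENTRYPOINT = (
        "trap 'pkill tail' INT TERM; "
        "clickhouse server --config-file=/etc/clickhouse-server/config.xml "
        "--log-file=/var/log/clickhouse-server/clickhouse-server.log "
        "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log "
        "--daemon -- ; "
        "coproc tail -f /dev/null; wait $$!"
    )

    client = docker.from_env()
    node = client.containers.run(
        "altinityinfra/integration-test:8b2301119731",  # image tag from the log
        command=["bash", "-c", ENTRYPOINT],
        detach=True,  # the container stays up on the tail coproc, not the server
    )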
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/.env --project-name roottestrecoverytimemetric-gw1 --file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/docker-compose.yml pull] Connecting to 172.16.7.4(172.16.7.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.7.3, port:2181, use_ssl:False Connecting to 172.16.7.3(172.16.7.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.7.2, port:2181, use_ssl:False Connecting to 172.16.7.2(172.16.7.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Failed connecting to Zookeeper within the connection retry policy. 
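Each 'Connecting to ...:2181 / Connect / GetChildren / Close' block above is one keeper readiness probe; the 'Connection dropped: socket connection broken' and retry-policy lines that follow a successful Close appear to be the client tearing the session down rather than a probe failure, since the harness goes on to report 'All instances of ZooKeeper started'. A sketch of the same probe with kazoo (host, port, and the expected 'keeper' root node are from the log):

    from kazoo.client import KazooClient
    from kazoo.handlers.threading import KazooTimeoutError

    def keeper_ready(ip: str, port: int = 2181, timeout: float = 5.0) -> bool:
        zk = KazooClient(hosts=f"{ip}:{port}")
        try:
            zk.start(timeout=timeout)                # Connect(...) in the log
            return "keeper" in zk.get_children("/")  # GetChildren(path='/')
        except KazooTimeoutError:
            return False                             # "Connection refused" phase
        finally:
            zk.stop()                                # Close(), session torn down
            zk.close()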
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') Trying to create Minio instance by command docker compose --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d Command:[docker compose --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --verbose up -d] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/exec/df7033a59dcd1546c744f2d2929fbfdbb68ef2261319c16594097914089b3637/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/630425f37eca0e28ae67580c77ef8fb45a5bf8dc69fabf0e29a1fb35499025d4/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/630425f37eca0e28ae67580c77ef8fb45a5bf8dc69fabf0e29a1fb35499025d4/json HTTP/1.1" 200 586 Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.2.4, port:2181, use_ssl:False Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.2.2, port:2181, use_ssl:False Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --project-name roottestreadonlytable-gw5 --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --project-name roottestreadonlytable-gw5 --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/docker-compose.yml up -d --no-recreate] Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Stopping Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/.env --project-name roottestrangehasheddictionarytypes-gw0 --file /ClickHouse/tests/integration/test_range_hashed_dictionary_types/_instances-0-gw0/node1/docker-compose.yml down --volumes] Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Stopping Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Stopping Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Stopped Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/logs/stderr.log ] && zgrep -aH "==================" 
/ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/.env --project-name roottestprofilesettingsandconstraintsorder-gw2 --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node1/docker-compose.yml --file /ClickHouse/tests/integration/test_profile_settings_and_constraints_order/_instances-0-gw2/node2/docker-compose.yml down --volumes] Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=2546, time_out=30000, session_id=8, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Stopping Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Stopped Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Removing Stderr: Container roottestrangehasheddictionarytypes-gw0-node1-1 Removed Stderr: Network roottestrangehasheddictionarytypes-gw0_default Removing Stderr: Network roottestrangehasheddictionarytypes-gw0_default Removed Cleanup called Docker networks for project roottestrangehasheddictionarytypes-gw0 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrangehasheddictionarytypes-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Stdout:4110 Clickhouse process running. run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Docker volumes for project roottestrangehasheddictionarytypes-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrangehasheddictionarytypes-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrangehasheddictionarytypes-gw0 Trying to prune unused networks... Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Trying to prune unused images... Command:[docker image prune -f] Stdout:4110 Executing query select 20 on node1 Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
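After pkill, the harness polls the PID pipeline until the old process is gone, starts a new one ('Clickhouse process running'), and then polls a trivial query, the 'Executing query select 20 on node1' lines, until the server answers. A sketch of that readiness loop, assuming the same query(sql) callable as in the dictionary sketch above:

    import time

    def wait_for_server(query, timeout: float = 60.0) -> None:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                if query("SELECT 20").strip() == "20":
                    return               # server is answering queries again
            except Exception:
                pass                     # not accepting connections yet
            time.sleep(0.5)
        raise TimeoutError("node did not answer 'SELECT 20' after restart")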
Command:[docker volume ls | wc -l] Stdout:5 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 5 Running tests in /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/test.py Cluster start called. is_up=False test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers Docker networks for project roottestreloadauxiliaryzookeepers-gw0 are NETWORK ID NAME DRIVER SCOPE Stderr:time="2025-04-02T03:20:49Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Volume "roottestreplicatedzerocopyprojectionmutation-gw8_data1-1" Creating Stderr: Volume "roottestreplicatedzerocopyprojectionmutation-gw8_data1-1" Created Stderr:time="2025-04-02T03:20:49Z" level=warning msg="Found orphan containers ([roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up." Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Started Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Started Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Started Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Started Stderr:time="2025-04-02T03:20:50Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:20:50Z" level=debug msg="otel error" error="" Trying to connect to Minio... 
get_instance_ip instance_name=minio1 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=proxy1 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1/json HTTP/1.1" 200 None Starting new HTTP connection (1): 172.16.7.7:9001 Incremented Retry for (url='/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (2): 172.16.7.7:9001 Incremented Retry for (url='/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Starting new HTTP connection (3): 172.16.7.7:9001 Incremented Retry for (url='/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<...>: Failed to establish a new connection: [Errno 111] Connection refused')': / Docker containers for project roottestreloadauxiliaryzookeepers-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Starting new HTTP connection (4): 172.16.7.7:9001 Can't connect to Minio: HTTPConnectionPool(host='172.16.7.7', port=9001): Max retries exceeded with url: / (Caused by NewConnectionError('<...>: Failed to establish a new connection: [Errno 111] Connection refused')) Docker volumes for project roottestreloadauxiliaryzookeepers-gw0 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestreloadauxiliaryzookeepers-gw0 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreloadauxiliaryzookeepers-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreloadauxiliaryzookeepers-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreloadauxiliaryzookeepers-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreloadauxiliaryzookeepers-gw0 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes...
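The urllib3 retries above are the MinIO readiness loop: each refused connection decrements Retry(total=...) until the pool gives up with "Can't connect to Minio", and the harness then starts a fresh attempt (connection (5) succeeds further down, after which the 'root' and 'root2' buckets are created). A sketch of that loop with the minio client; the endpoint is from the log, while the minio/minio123 credentials are an assumption, not taken from this log:

    import time
    from minio import Minio

    def wait_for_minio(endpoint: str, attempts: int = 10) -> Minio:
        # Assumed credentials; substitute whatever the compose file configures.
        client = Minio(endpoint, access_key="minio", secret_key="minio123",
                       secure=False)
        for _ in range(attempts):
            try:
                client.list_buckets()    # any cheap call proves readiness
                return client
            except Exception as exc:
                print(f"Can't connect to Minio: {exc}")
                time.sleep(1)
        raise ConnectionError(f"MinIO at {endpoint} never became ready")

    # client = wait_for_minio("172.16.7.7:9001")
    # if not client.bucket_exists("root"):   # the GET /root?location= -> 404
    #     client.make_bucket("root")         # the PUT /root -> 200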
Command:[docker volume ls | wc -l] Stdout:5 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 5 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/database Setup logs dir /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env --project-name roottestreloadauxiliaryzookeepers-gw0 --file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull] Stderr: Container roottests3accessheaders-gw9-node1-1 Stopping Stderr: Container roottests3accessheaders-gw9-resolver-1 Stopping Stderr: Container roottests3accessheaders-gw9-node1-1 Stopped Stderr: Container roottests3accessheaders-gw9-minio1-1 Stopping Stderr: Container roottests3accessheaders-gw9-minio1-1 Stopped Stderr: Container roottests3accessheaders-gw9-resolver-1 Stopped Stderr: Container roottests3accessheaders-gw9-proxy1-1 Stopping Stderr: 
Container roottests3accessheaders-gw9-proxy2-1 Stopping Stderr: Container roottests3accessheaders-gw9-proxy1-1 Stopped Stderr: Container roottests3accessheaders-gw9-proxy2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/.env --project-name roottests3accessheaders-gw9 --file /ClickHouse/tests/integration/test_s3_access_headers/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml down --volumes] Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Stopping Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Stopping Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Stopped Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Removing Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Stopped Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Removing Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node1-1 Removed Stderr: Container roottestprofilesettingsandconstraintsorder-gw2-node2-1 Removed Stderr: Network roottestprofilesettingsandconstraintsorder-gw2_default Removing Stderr: Network roottestprofilesettingsandconstraintsorder-gw2_default Removed Cleanup called Docker networks for project roottestprofilesettingsandconstraintsorder-gw2 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestprofilesettingsandconstraintsorder-gw2 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestprofilesettingsandconstraintsorder-gw2 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestprofilesettingsandconstraintsorder-gw2-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestprofilesettingsandconstraintsorder-gw2 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
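The per-project 'Cleanup called' blocks seen throughout all follow one pattern: list any leftover containers matching ^/<project>-.*-1$, check that none are still running, then prune unused images and volumes. A sketch of that sequence, assuming only the docker CLI:

    import subprocess

    def cleanup_project(project: str) -> None:
        leftovers = subprocess.run(
            ["docker", "container", "list", "--all",
             "--filter", f"name=^/{project}-.*-1$",
             "--format", "{{.ID}}:{{.Names}}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if leftovers:
            raise RuntimeError(f"Unstopped containers: {leftovers}")
        # Mirrors "Images pruned" / "Volumes pruned" in the log.
        subprocess.run(["docker", "image", "prune", "-f"], check=True)
        subprocess.run(["docker", "volume", "prune", "-f"], check=True)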
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Executing query select 20 on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 Stderr: Container roottestreadonlytable-gw5-zoo3-1 Running Stderr: Container roottestreadonlytable-gw5-zoo1-1 Running Stderr: Container roottestreadonlytable-gw5-zoo2-1 Running Stderr: Container roottestreadonlytable-gw5-node1-1 Creating Stderr: Container roottestreadonlytable-gw5-node2-1 Creating Stderr: Container roottestreadonlytable-gw5-node3-1 Creating Stderr: Container roottestreadonlytable-gw5-node1-1 Created Stderr: Container roottestreadonlytable-gw5-node2-1 Created Stderr: Container roottestreadonlytable-gw5-node3-1 Created Stderr: Container roottestreadonlytable-gw5-node1-1 Starting Stderr: Container roottestreadonlytable-gw5-node2-1 Starting Stderr: Container roottestreadonlytable-gw5-node3-1 Starting Stderr: Container roottestreadonlytable-gw5-node3-1 Started Stderr: Container roottestreadonlytable-gw5-node1-1 Started Stderr: Container roottestreadonlytable-gw5-node2-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.2.6... http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Stderr: Container roottests3accessheaders-gw9-resolver-1 Stopping Stderr: Container roottests3accessheaders-gw9-node1-1 Stopping Stderr: Container roottests3accessheaders-gw9-resolver-1 Stopped Stderr: Container roottests3accessheaders-gw9-resolver-1 Removing Stderr: Container roottests3accessheaders-gw9-node1-1 Stopped Stderr: Container roottests3accessheaders-gw9-node1-1 Removing Stderr: Container roottests3accessheaders-gw9-resolver-1 Removed Stderr: Container roottests3accessheaders-gw9-node1-1 Removed Stderr: Container roottests3accessheaders-gw9-minio1-1 Stopping Stderr: Container roottests3accessheaders-gw9-minio1-1 Stopped Stderr: Container roottests3accessheaders-gw9-minio1-1 Removing Stderr: Container roottests3accessheaders-gw9-minio1-1 Removed Stderr: Container roottests3accessheaders-gw9-proxy2-1 Stopping Stderr: Container roottests3accessheaders-gw9-proxy1-1 Stopping Stderr: Container roottests3accessheaders-gw9-proxy2-1 Stopped Stderr: Container roottests3accessheaders-gw9-proxy2-1 Removing Stderr: Container roottests3accessheaders-gw9-proxy1-1 Stopped Stderr: Container roottests3accessheaders-gw9-proxy1-1 Removing Stderr: Container roottests3accessheaders-gw9-proxy1-1 Removed Stderr: Container roottests3accessheaders-gw9-proxy2-1 Removed Stderr: Volume roottests3accessheaders-gw9_data1-1 Removing Stderr: Network roottests3accessheaders-gw9_default Removing Stderr: Volume roottests3accessheaders-gw9_data1-1 Removed Stderr: Network 
roottests3accessheaders-gw9_default Removed Cleanup called Docker networks for project roottests3accessheaders-gw9 are NETWORK ID NAME DRIVER SCOPE Starting new HTTP connection (5): 172.16.7.7:9001 http://172.16.7.7:9001 "GET / HTTP/1.1" 200 0 Connected to Minio. Docker containers for project roottests3accessheaders-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES http://172.16.7.7:9001 "GET /root?location= HTTP/1.1" 404 0 Docker volumes for project roottests3accessheaders-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottests3accessheaders-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] http://172.16.7.7:9001 "PUT /root HTTP/1.1" 200 0 S3 bucket 'root' created http://172.16.7.7:9001 "GET /root2?location= HTTP/1.1" 404 0 http://172.16.7.7:9001 "PUT /root2 HTTP/1.1" 200 0 S3 bucket 'root2' created ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/docker-compose.yml up -d --no-recreate] Unstopped containers: {} No running containers for project: roottests3accessheaders-gw9 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=3884, time_out=30000, session_id=9, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Running tests in /ClickHouse/tests/integration/test_replicating_constants/test.py test_replicating_constants/test.py::test_different_versions Cluster start called. 
is_up=False Docker networks for project roottestreplicatingconstants-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicatingconstants-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Docker volumes for project roottestreplicatingconstants-gw9 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestreplicatingconstants-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicatingconstants-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicatingconstants-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreplicatingconstants-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Unstopped containers: {} No running containers for project: roottestreplicatingconstants-gw9 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/database Setup logs dir /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node2 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/database Setup logs dir /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 
'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env --project-name roottestreplicatingconstants-gw9 --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/docker-compose.yml pull] http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Executing query system refresh view re.a0 on node1 http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Running Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Creating Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Created Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Starting Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Started Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Started ClickHouse instance created get_instance_ip 
instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.7.10... http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Stdout: PID TTY TIME CMD Stdout: 4110 ? 00:00:02 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None Stdout:4110 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/067ba44bc61a10c408352866442e272ec3335d94919472798363f2db2892bfce/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.2.7... 
http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/aceea122c73ef25d4a0ed5076354ceca2eb796e0d537d81c97d6549ab73cda52/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/aceea122c73ef25d4a0ed5076354ceca2eb796e0d537d81c97d6549ab73cda52/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/aceea122c73ef25d4a0ed5076354ceca2eb796e0d537d81c97d6549ab73cda52/json HTTP/1.1" 200 None ClickHouse node2 started get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node3, ip: 172.16.2.5... http://localhost:None "GET /v1.46/containers/roottestreadonlytable-gw5-node3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f0923d2aae584169c594792c7dc42d298e8ff5230a7dd42276edf337e25936f4/json HTTP/1.1" 200 None ClickHouse node3 started Executing query CREATE TABLE test_table_0(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/0', 'node1') ORDER BY tuple(); on node1 Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_0(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/0', 'node2') ORDER BY tuple(); on node2 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=1002, time_out=30000, session_id=7, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_0(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/0', 'node3') ORDER BY tuple(); on node3 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | 
awk '{print $1}'] Stdout:4110 Executing query SELECT name FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_1(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/1', 'node1') ORDER BY tuple(); on node1 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None Executing query SELECT default_compression_codec FROM system.parts where name = 'all_1_1_1' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/b9562836916565f704e3ae2a3c703ebb4f1eb56b80f235ed867af71fa4c0ef63/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.7.9... http://localhost:None "GET /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/5c200aa20c27233d1554eab0fd5f7aa3fe8e05bbb2902e10707d0aa4e0494c76/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_1(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/1', 'node2') ORDER BY tuple(); on node2 http://localhost:None "GET /v1.46/containers/5c200aa20c27233d1554eab0fd5f7aa3fe8e05bbb2902e10707d0aa4e0494c76/json HTTP/1.1" 200 None ClickHouse node2 started Starting mock server broken_s3.py run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname broken_s3.py) && echo import http.server
import logging
import random
import socket
import socketserver
import string
import struct
import sys
import threading
import time
import urllib.parse

INF_COUNT = 100000000


def _and_then(value, func):
    assert callable(func)
    return None if value is None else func(value)


class MockControl:
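    # Test-side handle for the mock: each method execs curl inside the mock's
    # container and hits the /mock_settings HTTP control plane defined below.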
    def __init__(self, cluster, container, port):
        self._cluster = cluster
        self._container = container
        self._port = port

    def reset(self):
        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                f"http://localhost:{self._port}/mock_settings/reset",
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_action(self, when, count=None, after=None, action=None, action_args=None):
        url = f"http://localhost:{self._port}/mock_settings/{when}?nothing=1"

        if count is not None:
            url += f"&count={count}"

        if after is not None:
            url += f"&after={after}"

        if action is not None:
            url += f"&action={action}"

        if action_args is not None:
            for x in action_args:
                url += f"&action_args={x}"

        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                url,
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_at_object_upload(self, **kwargs):
        self.setup_action("at_object_upload", **kwargs)

    def setup_at_part_upload(self, **kwargs):
        self.setup_action("at_part_upload", **kwargs)

    def setup_at_create_multi_part_upload(self, **kwargs):
        self.setup_action("at_create_multi_part_upload", **kwargs)

    def setup_fake_puts(self, part_length):
        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                f"http://localhost:{self._port}/mock_settings/fake_puts?when_length_bigger={part_length}",
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_fake_multpartuploads(self):
        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                f"http://localhost:{self._port}/mock_settings/setup_fake_multpartuploads?",
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_slow_answers(
        self, minimal_length=0, timeout=None, probability=None, count=None
    ):
        url = (
            f"http://localhost:{self._port}/"
            f"mock_settings/slow_put"
            f"?minimal_length={minimal_length}"
        )

        if timeout is not None:
            url += f"&timeout={timeout}"

        if probability is not None:
            url += f"&probability={probability}"

        if count is not None:
            url += f"&count={count}"

        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            ["curl", "-s", url],
            nothrow=True,
        )
        assert response == "OK", response


class _ServerRuntime:
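    # Mutable mock configuration shared by all handler threads; tests rewrite
    # these fields at runtime through the /mock_settings endpoints.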
    class SlowPut:
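        # Delays PUT requests whose body is longer than minimal_length, with
        # the given probability, for at most `count` requests.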
        def __init__(
            self,
            lock,
            probability_=None,
            timeout_=None,
            minimal_length_=None,
            count_=None,
        ):
            self.lock = lock
            self.probability = probability_ if probability_ is not None else 1
            self.timeout = timeout_ if timeout_ is not None else 0.1
            self.minimal_length = minimal_length_ if minimal_length_ is not None else 0
            self.count = count_ if count_ is not None else INF_COUNT

        def __str__(self):
            return (
                f"probability:{self.probability}"
                f" timeout:{self.timeout}"
                f" minimal_length:{self.minimal_length}"
                f" count:{self.count}"
            )

        def get_timeout(self, content_length):
            with self.lock:
                if content_length > self.minimal_length:
                    if self.count > 0:
                        if (
                            self.probability == 1
                            or random.random() <= self.probability
                        ):
                            self.count -= 1
                            return self.timeout
            return None

    class Expected500ErrorAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>ExpectedError</Code>"
                "<Message>mock s3 injected unretryable error</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(500, data)

    class SlowDownAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>SlowDown</Code>"
                "<Message>Slow Down.</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(429, data)

    # Make sure that the Alibaba errors (QpsLimitExceeded, TotalQpsLimitExceeded) are retriable.
    # We patched contrib/aws to achieve it: https://github.com/ClickHouse/aws-sdk-cpp/pull/22 https://github.com/ClickHouse/aws-sdk-cpp/pull/23
    # https://www.alibabacloud.com/help/en/oss/support/http-status-code-503
    class QpsLimitExceededAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>QpsLimitExceeded</Code>"
                "<Message>Please reduce your request rate.</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(429, data)

    class TotalQpsLimitExceededAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>TotalQpsLimitExceeded</Code>"
                "<Message>Please reduce your request rate.</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(429, data)

    class RedirectAction:
        def __init__(self, host="localhost", port=1):
            self.dst_host = _and_then(host, str)
            self.dst_port = _and_then(port, int)

        def inject_error(self, request_handler):
            request_handler.redirect(host=self.dst_host, port=self.dst_port)

    class ConnectionResetByPeerAction:
        def __init__(self, with_partial_data=None):
            self.partial_data = ""
            if with_partial_data is not None and with_partial_data == "1":
                self.partial_data = (
                    '<?xml version="1.0" encoding="UTF-8"?>\n'
                    "<InitiateMultipartUploadResult>\n"
                )

        def inject_error(self, request_handler):
            request_handler.read_all_input()

            if self.partial_data:
                request_handler.send_response(200)
                request_handler.send_header("Content-Type", "text/xml")
                request_handler.send_header("Content-Length", 10000)
                request_handler.end_headers()
                request_handler.wfile.write(bytes(self.partial_data, "UTF-8"))

            time.sleep(1)
            request_handler.connection.setsockopt(
                socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0)
            )
            request_handler.connection.close()

    class BrokenPipeAction:
        def inject_error(self, request_handler):
            # Partial read of the request body; the action object has no rfile
            # of its own, so read through the request handler.
            request_handler.rfile.read(50)

            time.sleep(1)
            request_handler.connection.setsockopt(
                socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0)
            )
            request_handler.connection.close()

    class ConnectionRefusedAction(RedirectAction):
        pass

    class CountAfter:
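        # Counts matching requests and injects the configured action for
        # `count` of them once `after` has been counted down.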
        def __init__(
            self, lock, count_=None, after_=None, action_=None, action_args_=None
        ):
            self.lock = lock

            self.count = count_ if count_ is not None else INF_COUNT
            self.after = after_ if after_ is not None else 0
            self.action = action_
            self.action_args = action_args_ if action_args_ is not None else []

            if self.action == "connection_refused":
                self.error_handler = _ServerRuntime.ConnectionRefusedAction()
            elif self.action == "connection_reset_by_peer":
                self.error_handler = _ServerRuntime.ConnectionResetByPeerAction(
                    *self.action_args
                )
            elif self.action == "broken_pipe":
                self.error_handler = _ServerRuntime.BrokenPipeAction()
            elif self.action == "redirect_to":
                self.error_handler = _ServerRuntime.RedirectAction(*self.action_args)
            elif self.action == "slow_down":
                self.error_handler = _ServerRuntime.SlowDownAction(*self.action_args)
            elif self.action == "qps_limit_exceeded":
                self.error_handler = _ServerRuntime.QpsLimitExceededAction(
                    *self.action_args
                )
            elif self.action == "total_qps_limit_exceeded":
                self.error_handler = _ServerRuntime.TotalQpsLimitExceededAction(
                    *self.action_args
                )
            else:
                self.error_handler = _ServerRuntime.Expected500ErrorAction()

        @staticmethod
        def from_cgi_params(lock, params):
            return _ServerRuntime.CountAfter(
                lock=lock,
                count_=_and_then(params.get("count", [None])[0], int),
                after_=_and_then(params.get("after", [None])[0], int),
                action_=params.get("action", [None])[0],
                action_args_=params.get("action_args", []),
            )

        def __str__(self):
            return f"count:{self.count} after:{self.after} action:{self.action} action_args:{self.action_args}"

        def has_effect(self):
            with self.lock:
                if self.after:
                    self.after -= 1
                if self.after == 0:
                    if self.count:
                        self.count -= 1
                        return True
                return False

        def inject_error(self, request_handler):
            self.error_handler.inject_error(request_handler)

    def __init__(self):
        self.lock = threading.Lock()
        self.at_part_upload = None
        self.at_object_upload = None
        self.fake_put_when_length_bigger = None
        self.fake_uploads = dict()
        self.slow_put = None
        self.fake_multipart_upload = None
        self.at_create_multi_part_upload = None

    def register_fake_upload(self, upload_id, key):
        with self.lock:
            self.fake_uploads[upload_id] = key

    def is_fake_upload(self, upload_id, key):
        with self.lock:
            if upload_id in self.fake_uploads:
                return self.fake_uploads[upload_id] == key
        return False

    def reset(self):
        with self.lock:
            self.at_part_upload = None
            self.at_object_upload = None
            self.fake_put_when_length_bigger = None
            self.fake_uploads = dict()
            self.slow_put = None
            self.fake_multipart_upload = None
            self.at_create_multi_part_upload = None


_runtime = _ServerRuntime()


def get_random_string(length):
    # choose from all lowercase letters
    letters = string.ascii_lowercase
    result_str = "".join(random.choice(letters) for i in range(length))
    return result_str


class RequestHandler(http.server.BaseHTTPRequestHandler):
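    # A transparent S3 proxy by default: every verb is redirected (307) to the
    # upstream MinIO unless the runtime settings call for an injected error,
    # a faked response, or an artificial delay.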
    def _ok(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK")

    def _ping(self):
        self._ok()

    def read_all_input(self):
        content_length = int(self.headers.get("Content-Length", 0))
        to_read = content_length
        while to_read > 0:
            # read content in order to avoid error on client
            # Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe
            # do it piece by piece in order to avoid big allocation
            size = min(to_read, 1024)
            str(self.rfile.read(size))
            to_read -= size

    def redirect(self, host=None, port=None):
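        # Answer 307 with a Location header pointing at the upstream (or at an
        # explicit host:port, as RedirectAction does for unreachable targets).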
        if host is None and port is None:
            host = self.server.upstream_host
            port = self.server.upstream_port

        self.read_all_input()

        self.send_response(307)
        url = f"http://{host}:{port}{self.path}"
        self.log_message("redirect to %s", url)
        self.send_header("Location", url)
        self.end_headers()
        self.wfile.write(b"Redirected")

    def write_error(self, http_code, data, content_length=None):
        if content_length is None:
            content_length = len(data)
        self.log_message("write_error %s", data)
        self.read_all_input()
        self.send_response(http_code)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(content_length))
        self.end_headers()
        if data:
            self.wfile.write(bytes(data, "UTF-8"))

    def _fake_put_ok(self):
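        # Pretend the PUT succeeded without storing anything: drain the body
        # and answer 200 with a fixed ETag.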
        self.log_message("fake put")

        self.read_all_input()

        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("ETag", "b54357faf0632cce46e942fa68356b38")
        self.send_header("Content-Length", 0)
        self.end_headers()

    def _fake_uploads(self, path, upload_id):
        self.read_all_input()

        parts = [x for x in path.split("/") if x]
        bucket = parts[0]
        key = "/".join(parts[1:])
        data = (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            "<InitiateMultipartUploadResult>\n"
            f"<Bucket>{bucket}</Bucket>"
            f"<Key>{key}</Key>"
            f"<UploadId>{upload_id}</UploadId>"
            "</InitiateMultipartUploadResult>"
        )

        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", len(data))
        self.end_headers()

        self.wfile.write(bytes(data, "UTF-8"))

    def _fake_post_ok(self, path):
        self.read_all_input()

        parts = [x for x in path.split("/") if x]
        bucket = parts[0]
        key = "/".join(parts[1:])
        location = "http://Example-Bucket.s3.Region.amazonaws.com/" + path
        data = (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            "<CompleteMultipartUploadResult>\n"
            f"<Location>{location}</Location>\n"
            f"<Bucket>{bucket}</Bucket>\n"
            f"<Key>{key}</Key>\n"
            f'<ETag>"3858f62230ac3c915f300c664312c11f-9"</ETag>\n'
            f"</CompleteMultipartUploadResult>\n"
        )

        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", len(data))
        self.end_headers()

        self.wfile.write(bytes(data, "UTF-8"))

    def _mock_settings(self):
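        # Control plane: dispatch /mock_settings/<command> requests that
        # reconfigure the shared _runtime state.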
        parts = urllib.parse.urlsplit(self.path)
        path = [x for x in parts.path.split("/") if x]
        assert path[0] == "mock_settings", path
        if len(path) < 2:
            return self.write_error(400, "_mock_settings: wrong command")

        if path[1] == "at_part_upload":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.at_part_upload = _ServerRuntime.CountAfter.from_cgi_params(
                _runtime.lock, params
            )
            self.log_message("set at_part_upload %s", _runtime.at_part_upload)
            return self._ok()

        if path[1] == "at_object_upload":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.at_object_upload = _ServerRuntime.CountAfter.from_cgi_params(
                _runtime.lock, params
            )
            self.log_message("set at_object_upload %s", _runtime.at_object_upload)
            return self._ok()

        if path[1] == "fake_puts":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.fake_put_when_length_bigger = int(
                params.get("when_length_bigger", [1024 * 1024])[0]
            )
            self.log_message("set fake_puts %s", _runtime.fake_put_when_length_bigger)
            return self._ok()

        if path[1] == "slow_put":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.slow_put = _ServerRuntime.SlowPut(
                lock=_runtime.lock,
                minimal_length_=_and_then(params.get("minimal_length", [None])[0], int),
                probability_=_and_then(params.get("probability", [None])[0], float),
                timeout_=_and_then(params.get("timeout", [None])[0], float),
                count_=_and_then(params.get("count", [None])[0], int),
            )
            self.log_message("set slow put %s", _runtime.slow_put)
            return self._ok()

        if path[1] == "setup_fake_multpartuploads":
            _runtime.fake_multipart_upload = True
            self.log_message("set setup_fake_multpartuploads")
            return self._ok()

        if path[1] == "at_create_multi_part_upload":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.at_create_multi_part_upload = (
                _ServerRuntime.CountAfter.from_cgi_params(_runtime.lock, params)
            )
            self.log_message(
                "set at_create_multi_part_upload %s",
                _runtime.at_create_multi_part_upload,
            )
            return self._ok()

        if path[1] == "reset":
            _runtime.reset()
            self.log_message("reset")
            return self._ok()

        return self.write_error(400, "_mock_settings: wrong command")

    def do_GET(self):
        if self.path == "/":
            return self._ping()

        if self.path.startswith("/mock_settings"):
            return self._mock_settings()

        self.log_message("get redirect")
        return self.redirect()

    def do_PUT(self):
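        # Data plane for uploads: requests with an uploadId are part uploads
        # (at_part_upload / fake-multipart paths), everything else is a plain
        # object upload (at_object_upload / fake_puts paths); unmatched
        # requests are proxied upstream.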
        content_length = int(self.headers.get("Content-Length", 0))

        if _runtime.slow_put is not None:
            timeout = _runtime.slow_put.get_timeout(content_length)
            if timeout is not None:
                self.log_message("slow put %s", timeout)
                time.sleep(timeout)

        parts = urllib.parse.urlsplit(self.path)
        params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
        upload_id = params.get("uploadId", [None])[0]

        if upload_id is not None:
            if _runtime.at_part_upload is not None:
                self.log_message(
                    "put at_part_upload %s, %s, %s",
                    _runtime.at_part_upload,
                    upload_id,
                    parts,
                )

                if _runtime.at_part_upload.has_effect():
                    return _runtime.at_part_upload.inject_error(self)
            if _runtime.fake_multipart_upload:
                if _runtime.is_fake_upload(upload_id, parts.path):
                    return self._fake_put_ok()
        else:
            if _runtime.at_object_upload is not None:
                if _runtime.at_object_upload.has_effect():
                    self.log_message(
                        "put error_at_object_upload %s, %s",
                        _runtime.at_object_upload,
                        parts,
                    )
                    return _runtime.at_object_upload.inject_error(self)
            if _runtime.fake_put_when_length_bigger is not None:
                if content_length > _runtime.fake_put_when_length_bigger:
                    self.log_message(
                        "put fake_put_when_length_bigger %s, %s, %s",
                        _runtime.fake_put_when_length_bigger,
                        content_length,
                        parts,
                    )
                    return self._fake_put_ok()

        self.log_message(
            "put redirect %s",
            parts,
        )
        return self.redirect()

    def do_POST(self):
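        # POST with ?uploads starts a multipart upload (CreateMultipartUpload);
        # POST with ?uploadId completes one (CompleteMultipartUpload).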
        parts = urllib.parse.urlsplit(self.path)
        params = urllib.parse.parse_qs(parts.query, keep_blank_values=True)
        uploads = params.get("uploads", [None])[0]
        if uploads is not None:
            if _runtime.at_create_multi_part_upload is not None:
                if _runtime.at_create_multi_part_upload.has_effect():
                    return _runtime.at_create_multi_part_upload.inject_error(self)

            if _runtime.fake_multipart_upload:
                upload_id = get_random_string(5)
                _runtime.register_fake_upload(upload_id, parts.path)
                return self._fake_uploads(parts.path, upload_id)

        upload_id = params.get("uploadId", [None])[0]
        if _runtime.is_fake_upload(upload_id, parts.path):
            return self._fake_post_ok(parts.path)

        return self.redirect()

    def do_HEAD(self):
        self.redirect()

    def do_DELETE(self):
        self.redirect()


class _ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    """Handle requests in a separate thread."""

    def set_upstream(self, upstream_host, upstream_port):
        self.upstream_host = upstream_host
        self.upstream_port = upstream_port


if __name__ == "__main__":
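    # argv[1] is the listen port; optional argv[2]/argv[3] override the
    # upstream host/port, which otherwise default to the minio1 service.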
    httpd = _ThreadedHTTPServer(("0.0.0.0", int(sys.argv[1])), RequestHandler)
    if len(sys.argv) == 4:
        httpd.set_upstream(sys.argv[2], sys.argv[3])
    else:
        httpd.set_upstream("minio1", 9001)
    httpd.serve_forever()
 | base64 --decode > broken_s3.py'] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 bash -c mkdir -p $(dirname broken_s3.py) && echo import http.server
import logging
import random
import socket
import socketserver
import string
import struct
import sys
import threading
import time
import urllib.parse

INF_COUNT = 100000000


def _and_then(value, func):
    assert callable(func)
    return None if value is None else func(value)


class MockControl:
    def __init__(self, cluster, container, port):
        self._cluster = cluster
        self._container = container
        self._port = port

    def reset(self):
        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                f"http://localhost:{self._port}/mock_settings/reset",
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_action(self, when, count=None, after=None, action=None, action_args=None):
        url = f"http://localhost:{self._port}/mock_settings/{when}?nothing=1"

        if count is not None:
            url += f"&count={count}"

        if after is not None:
            url += f"&after={after}"

        if action is not None:
            url += f"&action={action}"

        if action_args is not None:
            for x in action_args:
                url += f"&action_args={x}"

        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                url,
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_at_object_upload(self, **kwargs):
        self.setup_action("at_object_upload", **kwargs)

    def setup_at_part_upload(self, **kwargs):
        self.setup_action("at_part_upload", **kwargs)

    def setup_at_create_multi_part_upload(self, **kwargs):
        self.setup_action("at_create_multi_part_upload", **kwargs)

    def setup_fake_puts(self, part_length):
        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                f"http://localhost:{self._port}/mock_settings/fake_puts?when_length_bigger={part_length}",
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_fake_multpartuploads(self):
        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            [
                "curl",
                "-s",
                f"http://localhost:{self._port}/mock_settings/setup_fake_multpartuploads?",
            ],
            nothrow=True,
        )
        assert response == "OK", response

    def setup_slow_answers(
        self, minimal_length=0, timeout=None, probability=None, count=None
    ):
        url = (
            f"http://localhost:{self._port}/"
            f"mock_settings/slow_put"
            f"?minimal_length={minimal_length}"
        )

        if timeout is not None:
            url += f"&timeout={timeout}"

        if probability is not None:
            url += f"&probability={probability}"

        if count is not None:
            url += f"&count={count}"

        response = self._cluster.exec_in_container(
            self._cluster.get_container_id(self._container),
            ["curl", "-s", url],
            nothrow=True,
        )
        assert response == "OK", response


class _ServerRuntime:
    class SlowPut:
        def __init__(
            self,
            lock,
            probability_=None,
            timeout_=None,
            minimal_length_=None,
            count_=None,
        ):
            self.lock = lock
            self.probability = probability_ if probability_ is not None else 1
            self.timeout = timeout_ if timeout_ is not None else 0.1
            self.minimal_length = minimal_length_ if minimal_length_ is not None else 0
            self.count = count_ if count_ is not None else INF_COUNT

        def __str__(self):
            return (
                f"probability:{self.probability}"
                f" timeout:{self.timeout}"
                f" minimal_length:{self.minimal_length}"
                f" count:{self.count}"
            )

        def get_timeout(self, content_length):
            with self.lock:
                if content_length > self.minimal_length:
                    if self.count > 0:
                        if (
                            _runtime.slow_put.probability == 1
                            or random.random() <= _runtime.slow_put.probability
                        ):
                            self.count -= 1
                            return _runtime.slow_put.timeout
            return None

    class Expected500ErrorAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>ExpectedError</Code>"
                "<Message>mock s3 injected unretryable error</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(500, data)

    class SlowDownAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>SlowDown</Code>"
                "<Message>Slow Down.</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(429, data)

    # make sure that Alibaba errors (QpsLimitExceeded, TotalQpsLimitExceededAction) are retriable
    # we patched contrib/aws to achive it: https://github.com/ClickHouse/aws-sdk-cpp/pull/22 https://github.com/ClickHouse/aws-sdk-cpp/pull/23
    # https://www.alibabacloud.com/help/en/oss/support/http-status-code-503
    class QpsLimitExceededAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>QpsLimitExceeded</Code>"
                "<Message>Please reduce your request rate.</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(429, data)

    class TotalQpsLimitExceededAction:
        def inject_error(self, request_handler):
            data = (
                '<?xml version="1.0" encoding="UTF-8"?>'
                "<Error>"
                "<Code>TotalQpsLimitExceeded</Code>"
                "<Message>Please reduce your request rate.</Message>"
                "<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
                "</Error>"
            )
            request_handler.write_error(429, data)

    class RedirectAction:
        def __init__(self, host="localhost", port=1):
            self.dst_host = _and_then(host, str)
            self.dst_port = _and_then(port, int)

        def inject_error(self, request_handler):
            request_handler.redirect(host=self.dst_host, port=self.dst_port)

    class ConnectionResetByPeerAction:
        def __init__(self, with_partial_data=None):
            self.partial_data = ""
            if with_partial_data is not None and with_partial_data == "1":
                self.partial_data = (
                    '<?xml version="1.0" encoding="UTF-8"?>\n'
                    "<InitiateMultipartUploadResult>\n"
                )

        def inject_error(self, request_handler):
            request_handler.read_all_input()

            if self.partial_data:
                request_handler.send_response(200)
                request_handler.send_header("Content-Type", "text/xml")
                request_handler.send_header("Content-Length", 10000)
                request_handler.end_headers()
                request_handler.wfile.write(bytes(self.partial_data, "UTF-8"))

            time.sleep(1)
            request_handler.connection.setsockopt(
                socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0)
            )
            request_handler.connection.close()

    class BrokenPipeAction:
        def inject_error(self, request_handler):
            # partial read
            self.rfile.read(50)

            time.sleep(1)
            request_handler.connection.setsockopt(
                socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0)
            )
            request_handler.connection.close()

    class ConnectionRefusedAction(RedirectAction):
        pass

    class CountAfter:
        def __init__(
            self, lock, count_=None, after_=None, action_=None, action_args_=[]
        ):
            self.lock = lock

            self.count = count_ if count_ is not None else INF_COUNT
            self.after = after_ if after_ is not None else 0
            self.action = action_
            self.action_args = action_args_

            if self.action == "connection_refused":
                self.error_handler = _ServerRuntime.ConnectionRefusedAction()
            elif self.action == "connection_reset_by_peer":
                self.error_handler = _ServerRuntime.ConnectionResetByPeerAction(
                    *self.action_args
                )
            elif self.action == "broken_pipe":
                self.error_handler = _ServerRuntime.BrokenPipeAction()
            elif self.action == "redirect_to":
                self.error_handler = _ServerRuntime.RedirectAction(*self.action_args)
            elif self.action == "slow_down":
                self.error_handler = _ServerRuntime.SlowDownAction(*self.action_args)
            elif self.action == "qps_limit_exceeded":
                self.error_handler = _ServerRuntime.QpsLimitExceededAction(
                    *self.action_args
                )
            elif self.action == "total_qps_limit_exceeded":
                self.error_handler = _ServerRuntime.TotalQpsLimitExceededAction(
                    *self.action_args
                )
            else:
                self.error_handler = _ServerRuntime.Expected500ErrorAction()

        @staticmethod
        def from_cgi_params(lock, params):
            return _ServerRuntime.CountAfter(
                lock=lock,
                count_=_and_then(params.get("count", [None])[0], int),
                after_=_and_then(params.get("after", [None])[0], int),
                action_=params.get("action", [None])[0],
                action_args_=params.get("action_args", []),
            )

        def __str__(self):
            return f"count:{self.count} after:{self.after} action:{self.action} action_args:{self.action_args}"

        def has_effect(self):
            with self.lock:
                if self.after:
                    self.after -= 1
                if self.after == 0:
                    if self.count:
                        self.count -= 1
                        return True
                return False

        def inject_error(self, request_handler):
            self.error_handler.inject_error(request_handler)

    def __init__(self):
        self.lock = threading.Lock()
        self.at_part_upload = None
        self.at_object_upload = None
        self.fake_put_when_length_bigger = None
        self.fake_uploads = dict()
        self.slow_put = None
        self.fake_multipart_upload = None
        self.at_create_multi_part_upload = None

    def register_fake_upload(self, upload_id, key):
        with self.lock:
            self.fake_uploads[upload_id] = key

    def is_fake_upload(self, upload_id, key):
        with self.lock:
            if upload_id in self.fake_uploads:
                return self.fake_uploads[upload_id] == key
        return False

    def reset(self):
        with self.lock:
            self.at_part_upload = None
            self.at_object_upload = None
            self.fake_put_when_length_bigger = None
            self.fake_uploads = dict()
            self.slow_put = None
            self.fake_multipart_upload = None
            self.at_create_multi_part_upload = None


_runtime = _ServerRuntime()


def get_random_string(length):
    # choose from all lowercase letter
    letters = string.ascii_lowercase
    result_str = "".join(random.choice(letters) for i in range(length))
    return result_str


class RequestHandler(http.server.BaseHTTPRequestHandler):
    def _ok(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK")

    def _ping(self):
        self._ok()

    def read_all_input(self):
        content_length = int(self.headers.get("Content-Length", 0))
        to_read = content_length
        while to_read > 0:
            # read content in order to avoid error on client
            # Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe
            # do it piece by piece in order to avoid big allocation
            size = min(to_read, 1024)
            str(self.rfile.read(size))
            to_read -= size

    def redirect(self, host=None, port=None):
        if host is None and port is None:
            host = self.server.upstream_host
            port = self.server.upstream_port

        self.read_all_input()

        self.send_response(307)
        url = f"http://{host}:{port}{self.path}"
        self.log_message("redirect to %s", url)
        self.send_header("Location", url)
        self.end_headers()
        self.wfile.write(b"Redirected")

    def write_error(self, http_code, data, content_length=None):
        if content_length is None:
            content_length = len(data)
        self.log_message("write_error %s", data)
        self.read_all_input()
        self.send_response(http_code)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(content_length))
        self.end_headers()
        if data:
            self.wfile.write(bytes(data, "UTF-8"))

    def _fake_put_ok(self):
        self.log_message("fake put")

        self.read_all_input()

        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("ETag", "b54357faf0632cce46e942fa68356b38")
        self.send_header("Content-Length", 0)
        self.end_headers()

    def _fake_uploads(self, path, upload_id):
        self.read_all_input()

        parts = [x for x in path.split("/") if x]
        bucket = parts[0]
        key = "/".join(parts[1:])
        data = (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            "<InitiateMultipartUploadResult>\n"
            f"<Bucket>{bucket}</Bucket>"
            f"<Key>{key}</Key>"
            f"<UploadId>{upload_id}</UploadId>"
            "</InitiateMultipartUploadResult>"
        )

        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", len(data))
        self.end_headers()

        self.wfile.write(bytes(data, "UTF-8"))

    def _fake_post_ok(self, path):
        self.read_all_input()

        parts = [x for x in path.split("/") if x]
        bucket = parts[0]
        key = "/".join(parts[1:])
        location = "http://Example-Bucket.s3.Region.amazonaws.com/" + path
        data = (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            "<CompleteMultipartUploadResult>\n"
            f"<Location>{location}</Location>\n"
            f"<Bucket>{bucket}</Bucket>\n"
            f"<Key>{key}</Key>\n"
            f'<ETag>"3858f62230ac3c915f300c664312c11f-9"</ETag>\n'
            f"</CompleteMultipartUploadResult>\n"
        )

        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", len(data))
        self.end_headers()

        self.wfile.write(bytes(data, "UTF-8"))

    def _mock_settings(self):
        parts = urllib.parse.urlsplit(self.path)
        path = [x for x in parts.path.split("/") if x]
        assert path[0] == "mock_settings", path
        if len(path) < 2:
            return self.write_error(400, "_mock_settings: wrong command")

        if path[1] == "at_part_upload":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.at_part_upload = _ServerRuntime.CountAfter.from_cgi_params(
                _runtime.lock, params
            )
            self.log_message("set at_part_upload %s", _runtime.at_part_upload)
            return self._ok()

        if path[1] == "at_object_upload":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.at_object_upload = _ServerRuntime.CountAfter.from_cgi_params(
                _runtime.lock, params
            )
            self.log_message("set at_object_upload %s", _runtime.at_object_upload)
            return self._ok()

        if path[1] == "fake_puts":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.fake_put_when_length_bigger = int(
                params.get("when_length_bigger", [1024 * 1024])[0]
            )
            self.log_message("set fake_puts %s", _runtime.fake_put_when_length_bigger)
            return self._ok()

        if path[1] == "slow_put":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.slow_put = _ServerRuntime.SlowPut(
                lock=_runtime.lock,
                minimal_length_=_and_then(params.get("minimal_length", [None])[0], int),
                probability_=_and_then(params.get("probability", [None])[0], float),
                timeout_=_and_then(params.get("timeout", [None])[0], float),
                count_=_and_then(params.get("count", [None])[0], int),
            )
            self.log_message("set slow put %s", _runtime.slow_put)
            return self._ok()

        if path[1] == "setup_fake_multpartuploads":
            _runtime.fake_multipart_upload = True
            self.log_message("set setup_fake_multpartuploads")
            return self._ok()

        if path[1] == "at_create_multi_part_upload":
            params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
            _runtime.at_create_multi_part_upload = (
                _ServerRuntime.CountAfter.from_cgi_params(_runtime.lock, params)
            )
            self.log_message(
                "set at_create_multi_part_upload %s",
                _runtime.at_create_multi_part_upload,
            )
            return self._ok()

        if path[1] == "reset":
            _runtime.reset()
            self.log_message("reset")
            return self._ok()

        return self.write_error(400, "_mock_settings: wrong command")

    def do_GET(self):
        if self.path == "/":
            return self._ping()

        if self.path.startswith("/mock_settings"):
            return self._mock_settings()

        self.log_message("get redirect")
        return self.redirect()

    def do_PUT(self):
        content_length = int(self.headers.get("Content-Length", 0))

        if _runtime.slow_put is not None:
            timeout = _runtime.slow_put.get_timeout(content_length)
            if timeout is not None:
                self.log_message("slow put %s", timeout)
                time.sleep(timeout)

        parts = urllib.parse.urlsplit(self.path)
        params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
        upload_id = params.get("uploadId", [None])[0]

        if upload_id is not None:
            if _runtime.at_part_upload is not None:
                self.log_message(
                    "put at_part_upload %s, %s, %s",
                    _runtime.at_part_upload,
                    upload_id,
                    parts,
                )

                if _runtime.at_part_upload.has_effect():
                    return _runtime.at_part_upload.inject_error(self)
            if _runtime.fake_multipart_upload:
                if _runtime.is_fake_upload(upload_id, parts.path):
                    return self._fake_put_ok()
        else:
            if _runtime.at_object_upload is not None:
                if _runtime.at_object_upload.has_effect():
                    self.log_message(
                        "put error_at_object_upload %s, %s",
                        _runtime.at_object_upload,
                        parts,
                    )
                    return _runtime.at_object_upload.inject_error(self)
            if _runtime.fake_put_when_length_bigger is not None:
                if content_length > _runtime.fake_put_when_length_bigger:
                    self.log_message(
                        "put fake_put_when_length_bigger %s, %s, %s",
                        _runtime.fake_put_when_length_bigger,
                        content_length,
                        parts,
                    )
                    return self._fake_put_ok()

        self.log_message(
            "put redirect %s",
            parts,
        )
        return self.redirect()

    def do_POST(self):
        parts = urllib.parse.urlsplit(self.path)
        params = urllib.parse.parse_qs(parts.query, keep_blank_values=True)
        # A bare "uploads" query parameter marks CreateMultipartUpload.
        uploads = params.get("uploads", [None])[0]
        if uploads is not None:
            if _runtime.at_create_multi_part_upload is not None:
                if _runtime.at_create_multi_part_upload.has_effect():
                    return _runtime.at_create_multi_part_upload.inject_error(self)

            # Hand out a fake upload id and remember it, so that later
            # UploadPart and CompleteMultipartUpload calls can be
            # short-circuited as well.
            if _runtime.fake_multipart_upload:
                upload_id = get_random_string(5)
                _runtime.register_fake_upload(upload_id, parts.path)
                return self._fake_uploads(parts.path, upload_id)

        # "uploadId" marks CompleteMultipartUpload for a known fake upload.
        upload_id = params.get("uploadId", [None])[0]
        if _runtime.is_fake_upload(upload_id, parts.path):
            return self._fake_post_ok(parts.path)

        return self.redirect()

    def do_HEAD(self):
        # HEAD and DELETE are never faulted; always proxy them upstream.
        self.redirect()

    def do_DELETE(self):
        self.redirect()


class _ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    """Handle requests in a separate thread."""

    def set_upstream(self, upstream_host, upstream_port):
        self.upstream_host = upstream_host
        self.upstream_port = upstream_port


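# Usage (matches the invocation in the log below):
#   python3 broken_s3.py <listen_port> [<upstream_host> <upstream_port>]
# With no explicit upstream the proxy forwards to minio1:9001.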
if __name__ == "__main__":
    httpd = _ThreadedHTTPServer(("0.0.0.0", int(sys.argv[1])), RequestHandler)
    if len(sys.argv) == 4:
        httpd.set_upstream(sys.argv[2], int(sys.argv[3]))
    else:
        # Default upstream is the MinIO instance from the compose setup.
        httpd.set_upstream("minio1", 9001)
    httpd.serve_forever()
 | base64 --decode > broken_s3.py] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 detach:True nothrow:False cmd: ['bash', '-c', 'python3 broken_s3.py 8083 >/var/log/resolver/broken_s3.log 2>/var/log/resolver/broken_s3.err.log'] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 bash -c python3 broken_s3.py 8083 >/var/log/resolver/broken_s3.log 2>/var/log/resolver/broken_s3.err.log] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8083/'] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 curl -s http://localhost:8083/] Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Exitcode:7 Executing query CREATE TABLE test_table_1(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/1', 'node3') ORDER BY tuple(); on node3 Executing query CREATE TABLE test_table_2(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/2', 'node1') ORDER BY tuple(); on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE test_table_2(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/2', 'node2') ORDER BY tuple(); on node2 Stdout:4110 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query CREATE TABLE test_table_2(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/2', 'node3') ORDER BY tuple(); on node3 Executing query CREATE TABLE test_table_3(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/3', 'node1') ORDER BY tuple(); on node1 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 detach:False nothrow:True cmd: ['curl', '-s', 'http://localhost:8083/'] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 curl -s http://localhost:8083/] Stdout:OK broken_s3.py answered OK on attempt 2 Mock server broken_s3.py started run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 8 ? 
00:00:01 clickhouse run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c pkill clickhouse] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE test_table_3(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/3', 'node2') ORDER BY tuple(); on node2 Stdout:8 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query CREATE TABLE test_table_3(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/3', 'node3') ORDER BY tuple(); on node3 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4110 Executing query CREATE TABLE test_table_4(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/4', 'node1') ORDER BY tuple(); on node1 Executing query CREATE TABLE test_table_4(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/4', 'node2') ORDER BY tuple(); on node2 Executing query CREATE TABLE test_table_4(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/4', 'node3') ORDER BY tuple(); on node3 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query CREATE TABLE test_table_5(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/5', 'node1') ORDER BY tuple(); on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query CREATE TABLE test_table_5(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/5', 'node2') ORDER BY tuple(); on node2 Stdout:4110 Executing query CREATE TABLE test_table_5(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/5', 'node3') ORDER BY tuple(); on node3 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query CREATE TABLE test_table_6(a UInt64) ENGINE = 
ReplicatedMergeTree('/clickhouse/tables/test/replicated/6', 'node1') ORDER BY tuple(); on node1 Executing query CREATE TABLE test_table_6(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/6', 'node2') ORDER BY tuple(); on node2 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/config.d/storage_conf.xml) && echo PGNsaWNraG91c2U+CgogICAgPHN0b3JhZ2VfY29uZmlndXJhdGlvbj4KICAgICAgICA8ZGlza3M+CiAgICAgICAgICAgIDxzMz4KICAgICAgICAgICAgICAgIDx0eXBlPnMzPC90eXBlPgogICAgICAgICAgICAgICAgPGVuZHBvaW50Pmh0dHA6Ly9taW5pbzE6OTAwMS9yb290L2RhdGEvPC9lbmRwb2ludD4KICAgICAgICAgICAgICAgIDxhY2Nlc3Nfa2V5X2lkPm1pbmlvPC9hY2Nlc3Nfa2V5X2lkPgogICAgICAgICAgICAgICAgPHNlY3JldF9hY2Nlc3Nfa2V5Pm1pbmlvMTIzPC9zZWNyZXRfYWNjZXNzX2tleT4KICAgICAgICAgICAgICAgIDxza2lwX2FjY2Vzc19jaGVjaz50cnVlPC9za2lwX2FjY2Vzc19jaGVjaz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L2Rpc2tzPgogICAgICAgIDxwb2xpY2llcz4KICAgICAgICAgICAgPHMzPgogICAgICAgICAgICAgICAgPHZvbHVtZXM+CiAgICAgICAgICAgICAgICAgICAgPG1haW4+CiAgICAgICAgICAgICAgICAgICAgICAgIDxkaXNrPnMzPC9kaXNrPgogICAgICAgICAgICAgICAgICAgIDwvbWFpbj4KICAgICAgICAgICAgICAgIDwvdm9sdW1lcz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L3BvbGljaWVzPgogICAgPC9zdG9yYWdlX2NvbmZpZ3VyYXRpb24+CgogICAgPG1lcmdlX3RyZWU+CiAgICAgICAgPGFsbG93X3JlbW90ZV9mc196ZXJvX2NvcHlfcmVwbGljYXRpb24+MTwvYWxsb3dfcmVtb3RlX2ZzX3plcm9fY29weV9yZXBsaWNhdGlvbj4KICAgIDwvbWVyZ2VfdHJlZT4KCjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/config.d/storage_conf.xml'] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/config.d/storage_conf.xml) && echo PGNsaWNraG91c2U+CgogICAgPHN0b3JhZ2VfY29uZmlndXJhdGlvbj4KICAgICAgICA8ZGlza3M+CiAgICAgICAgICAgIDxzMz4KICAgICAgICAgICAgICAgIDx0eXBlPnMzPC90eXBlPgogICAgICAgICAgICAgICAgPGVuZHBvaW50Pmh0dHA6Ly9taW5pbzE6OTAwMS9yb290L2RhdGEvPC9lbmRwb2ludD4KICAgICAgICAgICAgICAgIDxhY2Nlc3Nfa2V5X2lkPm1pbmlvPC9hY2Nlc3Nfa2V5X2lkPgogICAgICAgICAgICAgICAgPHNlY3JldF9hY2Nlc3Nfa2V5Pm1pbmlvMTIzPC9zZWNyZXRfYWNjZXNzX2tleT4KICAgICAgICAgICAgICAgIDxza2lwX2FjY2Vzc19jaGVjaz50cnVlPC9za2lwX2FjY2Vzc19jaGVjaz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L2Rpc2tzPgogICAgICAgIDxwb2xpY2llcz4KICAgICAgICAgICAgPHMzPgogICAgICAgICAgICAgICAgPHZvbHVtZXM+CiAgICAgICAgICAgICAgICAgICAgPG1haW4+CiAgICAgICAgICAgICAgICAgICAgICAgIDxkaXNrPnMzPC9kaXNrPgogICAgICAgICAgICAgICAgICAgIDwvbWFpbj4KICAgICAgICAgICAgICAgIDwvdm9sdW1lcz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L3BvbGljaWVzPgogICAgPC9zdG9yYWdlX2NvbmZpZ3VyYXRpb24+CgogICAgPG1lcmdlX3RyZWU+CiAgICAgICAgPGFsbG93X3JlbW90ZV9mc196ZXJvX2NvcHlfcmVwbGljYXRpb24+MTwvYWxsb3dfcmVtb3RlX2ZzX3plcm9fY29weV9yZXBsaWNhdGlvbj4KICAgIDwvbWVyZ2VfdHJlZT4KCjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/config.d/storage_conf.xml] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c 
ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/843d6008b0a3f1a3f979026d39161cca7182bd81ac5bbda226f24d5827bd20b2/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/843d6008b0a3f1a3f979026d39161cca7182bd81ac5bbda226f24d5827bd20b2/json HTTP/1.1" 200 586 Executing query CREATE TABLE test_table_6(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/6', 'node3') ORDER BY tuple(); on node3 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: zoo1 Skipped - Image is already being pulled by zoo3 Stderr: zoo2 Skipped - Image is already being pulled by zoo3 Stderr: node Skipped - Image is already being pulled by zoo3 Stderr: zoo3 Pulling Stderr: zoo3 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper1/log', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper1/config', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper1/coordination', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper2/log', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper2/config', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper2/coordination', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper3/log', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper3/config', '/ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/keeper3/coordination'] Command:[docker compose --project-name roottestreloadauxiliaryzookeepers-gw0 --env-file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stdout:4110 Stderr: zoo1 Skipped - Image is already being pulled by node1 Stderr: zoo2 Skipped - Image is already being pulled by node1 Stderr: zoo3 Skipped - Image is already being pulled by node1 Stderr: node1 Pulling Stderr: node2 Pulling Stderr: node2 Pulled Stderr: node1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper1/log', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper1/config', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper1/coordination', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper2/log', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper2/config', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper2/coordination', 
'/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper3/log', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper3/config', '/ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/keeper3/coordination'] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/.env --project-name roottestrecoverytimemetric-gw1 --file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/.env --project-name roottestrecoverytimemetric-gw1 --file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/docker-compose.yml up -d --no-recreate] Command:[docker compose --project-name roottestreplicatingconstants-gw9 --env-file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/.env --project-name roottestrelativefilepath-gw3 --file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/.env --project-name roottestrelativefilepath-gw3 --file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/docker-compose.yml up -d --no-recreate] Executing query CREATE TABLE test_table_7(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/7', 'node1') ORDER BY tuple(); on node1 Executing query CREATE TABLE test_table_7(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/7', 'node2') ORDER BY tuple(); on node2 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Stderr:time="2025-04-02T03:20:57Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreloadauxiliaryzookeepers-gw0_default Creating Stderr: Network roottestreloadauxiliaryzookeepers-gw0_default Created Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Creating Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Creating Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Creating Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Created Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Created Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Created Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Starting Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Starting Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Starting Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Started Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Started Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Started Stderr:time="2025-04-02T03:20:58Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:20:58Z" level=debug msg="otel error" error="" Wait ZooKeeper to start 
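For readability: the base64 payload written to /etc/clickhouse-server/config.d/storage_conf.xml above decodes to the following configuration, namely an s3 disk backed by the minio1 endpoint, a storage policy wrapping it, and zero-copy replication enabled.

<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>http://minio1:9001/root/data/</endpoint>
                <access_key_id>minio</access_key_id>
                <secret_access_key>minio123</secret_access_key>
                <skip_access_check>true</skip_access_check>
            </s3>
        </disks>
        <policies>
            <s3>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3>
        </policies>
    </storage_configuration>
    <merge_tree>
        <allow_remote_fs_zero_copy_replication>1</allow_remote_fs_zero_copy_replication>
    </merge_tree>
</clickhouse>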
get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreloadauxiliaryzookeepers-gw0-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.3.3, port:2181, use_ssl:False run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Connecting to 172.16.3.3(172.16.3.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query CREATE TABLE test_table_7(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/7', 'node3') ORDER BY tuple(); on node3 Connecting to 172.16.3.3(172.16.3.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stdout:753 Clickhouse process running. run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:753 Executing query select 20 on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4110 Stderr: Network roottestrecoverytimemetric-gw1_default Creating Stderr: Network roottestrecoverytimemetric-gw1_default Created Stderr: Container roottestrecoverytimemetric-gw1-node-1 Creating Stderr: Container roottestrecoverytimemetric-gw1-node-1 Created Stderr: Container roottestrecoverytimemetric-gw1-node-1 Starting Stderr: Container roottestrecoverytimemetric-gw1-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrecoverytimemetric-gw1-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrecoverytimemetric-gw1-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.5.2... 
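The broken_s3.py mock started above is driven over plain HTTP. A minimal sketch of controlling it from a test, assuming only the two endpoints visible in the handler code (the health check at "/" and /mock_settings/reset; the remaining /mock_settings commands take query parameters parsed by CountAfter.from_cgi_params, which is defined earlier in the helper and not shown here):

import urllib.request

MOCK = "http://localhost:8083"  # the run above polls the same address with curl

def mock_is_up() -> bool:
    # GET / is answered by _ping(); the body is "OK" once the server is ready.
    with urllib.request.urlopen(MOCK + "/") as resp:
        return resp.read().strip() == b"OK"

def mock_reset() -> None:
    # GET /mock_settings/reset drops any armed fault injection.
    urllib.request.urlopen(MOCK + "/mock_settings/reset").read()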
http://localhost:None "GET /v1.46/containers/roottestrecoverytimemetric-gw1-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Connecting to 172.16.3.3(172.16.3.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_8(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/8', 'node1') ORDER BY tuple(); on node1 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Stderr:time="2025-04-02T03:20:57Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreplicatingconstants-gw9_default Creating Stderr: Network roottestreplicatingconstants-gw9_default Created Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Creating Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Creating Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Creating Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Created Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Created Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Created Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Starting Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Starting Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Starting Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Started Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Started Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Started Stderr:time="2025-04-02T03:20:59Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:20:59Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.6.2, port:2181, use_ssl:False Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stderr: Network roottestrelativefilepath-gw3_default Creating Stderr: Network roottestrelativefilepath-gw3_default Created Stderr: Container roottestrelativefilepath-gw3-node-1 Creating Stderr: Container roottestrelativefilepath-gw3-node-1 Created Stderr: Container roottestrelativefilepath-gw3-node-1 Starting Stderr: Container roottestrelativefilepath-gw3-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrelativefilepath-gw3-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestrelativefilepath-gw3-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.9.2... 
http://localhost:None "GET /v1.46/containers/roottestrelativefilepath-gw3-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_8(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/8', 'node2') ORDER BY tuple(); on node2 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Connecting to 172.16.3.3(172.16.3.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_8(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/8', 'node3') ORDER BY tuple(); on node3 http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps -C clickhouse] http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Stdout: PID TTY TIME CMD Stdout: 8 ? 
00:00:01 clickhouse run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c pkill clickhouse] http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/exec/630425f37eca0e28ae67580c77ef8fb45a5bf8dc69fabf0e29a1fb35499025d4/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 No clickhouse process running. Start new one. 
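The repeated ps ax | grep 'clickhouse' invocations above are a shutdown poll: after pkill clickhouse the runner keeps re-listing PIDs until the output is empty, then starts a fresh server. A minimal sketch of that pattern with the standard library (the ps pipeline is copied from the log; the helper name and timeout are illustrative):

import subprocess
import time

PS_CMD = (
    "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
    "| grep -v 'bash -c' | awk '{print $1}'"
)

def wait_for_clickhouse_to_stop(container: str, attempts: int = 30) -> None:
    # Poll the container until no clickhouse PIDs remain.
    for _ in range(attempts):
        out = subprocess.check_output(
            ["docker", "exec", container, "bash", "-c", PS_CMD]
        )
        if not out.strip():
            return
        time.sleep(1)
    raise TimeoutError(f"clickhouse still running in {container}")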
Stdout:8 http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "POST /v1.46/exec/e91a304b17c751b0a4c740169e9f8569b21a909e5ef0eb3cc00283b4bf6e5e35/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/e91a304b17c751b0a4c740169e9f8569b21a909e5ef0eb3cc00283b4bf6e5e35/json HTTP/1.1" 200 586 Executing query CREATE TABLE test_table_9(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/9', 'node1') ORDER BY tuple(); on node1 http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Connecting to 172.16.3.3(172.16.3.3):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_9(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/9', 'node2') ORDER BY tuple(); on node2 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 
detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Stdout:4949 Clickhouse process running. run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Stdout:4949 Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Executing query CREATE TABLE test_table_9(a UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated/9', 'node3') ORDER BY tuple(); on node3 http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET 
/v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Executing query INSERT INTO test_table_0 VALUES (1), (2), (3), (4), (5); on node1 http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Executing query select 20 on node1 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Stdout:8 http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Connecting to 172.16.3.3(172.16.3.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
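The ZooKeeper wait loop above retries until a client can connect and list the root znode; the GetChildren(path='/') answer ['keeper'] is what marks a node as up. A sketch of that probe with kazoo, the client library whose wire messages appear in this log (host and port taken from the log, function name illustrative):

from kazoo.client import KazooClient

def zookeeper_ready(host: str = "172.16.3.3", port: int = 2181) -> bool:
    zk = KazooClient(hosts=f"{host}:{port}")
    try:
        zk.start(timeout=5)  # raises if no connection within the timeout
        return "keeper" in zk.get_children("/")
    except Exception:
        return False  # e.g. "Connection refused" while the node is starting
    finally:
        zk.stop()
        zk.close()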
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreloadauxiliaryzookeepers-gw0-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.3.2, port:2181, use_ssl:False Connecting to 172.16.3.2(172.16.3.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() http://localhost:None "GET /v1.46/containers/994b8e4b75d066f91bcb75bd4e1549e88519b20b6e5339e34168b9f1a57bb44d/json HTTP/1.1" 200 None ClickHouse node started Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost run container_id:roottestrelativefilepath-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p user_files'] Command:[docker exec -u root --privileged roottestrelativefilepath-gw3-node-1 bash -c mkdir -p user_files] http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None Executing query INSERT INTO test_table_1 VALUES (1), (2), (3), (4), (5); on node1 run container_id:roottestrelativefilepath-gw3-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'echo "Test\t111.222\nData\t333.444" > user_files/relative_user_file_test'] Command:[docker exec -u root --privileged roottestrelativefilepath-gw3-node-1 bash -c echo "Test 111.222 Data 333.444" > user_files/relative_user_file_test] Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestreloadauxiliaryzookeepers-gw0-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.3.4, port:2181, use_ssl:False Connecting to 172.16.3.4(172.16.3.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) http://localhost:None "GET /v1.46/containers/a2482557f6e9454511a474c72faa19f354bea842c69726d3d50aa34aa4a6ac75/json HTTP/1.1" 200 None ClickHouse node started Executing query DROP DATABASE IF EXISTS rdb; CREATE DATABASE rdb ENGINE = Replicated('/test/test_recovery_time_metric', 'shard1', 'replica1') on node Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query select count() from file('relative_user_file_test', 'TSV', 'text String, number Float64') on node Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env --project-name roottestreloadauxiliaryzookeepers-gw0 --file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env --project-name roottestreloadauxiliaryzookeepers-gw0 --file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate] Executing query INSERT INTO test_table_2 VALUES (1), (2), (3), (4), (5); on node1 Executing query select 20 on node1 Executing query DROP TABLE IF EXISTS rdb.t; CREATE TABLE rdb.t ( `x` UInt32 ) ENGINE = MergeTree ORDER BY x on node Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query select count() from file('../user_files/relative_user_file_test', 'TSV', 'text String, number Float64') on node Executing query INSERT INTO test_table_3 VALUES (1), (2), (3), (4), (5); on node1 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'rm /var/lib/clickhouse/metadata/rdb/t.sql'] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c rm /var/lib/clickhouse/metadata/rdb/t.sql] Stdout:8 run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrecoverytimemetric-gw1-node-1 bash -c ps -C clickhouse] Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Running Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Running Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Running Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Creating Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Created Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Starting Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestreloadauxiliaryzookeepers-gw0-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestreloadauxiliaryzookeepers-gw0-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.3.5... 
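"Waiting for ClickHouse start in node ..." above is the runner polling container state until the server accepts queries (the select 20 probes). An equivalent readiness check over the HTTP interface could look like the sketch below; port 8123 and the /ping endpoint are ClickHouse defaults assumed here, not values shown in this log:

import time
import urllib.request

def wait_for_clickhouse(ip: str, attempts: int = 120) -> None:
    # /ping answers "Ok." once the server accepts connections.
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(f"http://{ip}:8123/ping", timeout=1) as r:
                if r.read().strip() == b"Ok.":
                    return
        except OSError:
            pass  # not up yet, retry
        time.sleep(0.5)
    raise TimeoutError(f"ClickHouse at {ip} did not start in time")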
http://localhost:None "GET /v1.46/containers/roottestreloadauxiliaryzookeepers-gw0-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Stdout: PID TTY TIME CMD Stdout: 8 ? 00:00:02 clickhouse run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrecoverytimemetric-gw1-node-1 bash -c pkill clickhouse] http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-zoo2-1/json HTTP/1.1" 200 None run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] get_kazoo_client: zoo2, ip:172.16.6.4, port:2181, use_ssl:False Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query INSERT INTO test_table_4 VALUES (1), (2), (3), (4), (5); on node1 Stdout:8 Executing query select 20 on node1 Failed connecting to Zookeeper within the connection retry policy. 
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.6.3, port:2181, use_ssl:False Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) [gw3] PASSED test_relative_filepath/test.py::test_filepath Command:[docker compose --env-file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/.env --project-name roottestrelativefilepath-gw3 --file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/docker-compose.yml stop --timeout 20] Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env --project-name roottestreplicatingconstants-gw9 --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env --project-name roottestreplicatingconstants-gw9 --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/docker-compose.yml up -d --no-recreate] http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query INSERT INTO test_table_5 VALUES (1), (2), (3), (4), (5); on node1 http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query INSERT INTO test_table_6 VALUES (1), (2), (3), (4), (5); on node1 http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None run 
container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query select 20 on node1 Stdout:8 Stdout:742 http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Running Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Running Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Running Stderr: Container roottestreplicatingconstants-gw9-node2-1 Creating Stderr: Container roottestreplicatingconstants-gw9-node1-1 Creating Stderr: Container roottestreplicatingconstants-gw9-node1-1 Created Stderr: Container roottestreplicatingconstants-gw9-node2-1 Created Stderr: Container roottestreplicatingconstants-gw9-node2-1 Starting Stderr: Container roottestreplicatingconstants-gw9-node1-1 Starting Stderr: Container roottestreplicatingconstants-gw9-node1-1 Started Stderr: Container roottestreplicatingconstants-gw9-node2-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.6.6... 
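The long run of CREATE TABLE test_table_N statements in this stretch of the log follows a single template: each index gets one shared ZooKeeper path and is created on every node with the replica name set to that node. A sketch of generating those statements (node list and table count inferred from the log):

NODES = ["node1", "node2", "node3"]

def replicated_ddl(index: int, replica: str) -> str:
    return (
        f"CREATE TABLE test_table_{index}(a UInt64) ENGINE = ReplicatedMergeTree("
        f"'/clickhouse/tables/test/replicated/{index}', '{replica}') ORDER BY tuple();"
    )

for i in range(10):  # test_table_0 .. test_table_9 in this run
    for node in NODES:
        print(f"Executing query {replicated_ddl(i, node)} on {node}")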
http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query INSERT INTO test_table_7 VALUES (1), (2), (3), (4), (5); on node1 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Stdout:8 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None Executing query INSERT INTO test_table_8 VALUES (1), (2), (3), (4), (5); on node1 http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query select 20 on node1 Executing query INSERT INTO test_table_9 VALUES (1), (2), (3), (4), (5); on node1 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 
'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/config.d/storage_conf.xml) && echo PGNsaWNraG91c2U+CgogICAgPHN0b3JhZ2VfY29uZmlndXJhdGlvbj4KICAgICAgICA8ZGlza3M+CiAgICAgICAgICAgIDxzMz4KICAgICAgICAgICAgICAgIDx0eXBlPnMzPC90eXBlPgogICAgICAgICAgICAgICAgPGVuZHBvaW50Pmh0dHA6Ly9taW5pbzE6OTAwMS9yb290L2RhdGEvPC9lbmRwb2ludD4KICAgICAgICAgICAgICAgIDxhY2Nlc3Nfa2V5X2lkPm1pbmlvPC9hY2Nlc3Nfa2V5X2lkPgogICAgICAgICAgICAgICAgPHNlY3JldF9hY2Nlc3Nfa2V5Pm1pbmlvMTIzPC9zZWNyZXRfYWNjZXNzX2tleT4KICAgICAgICAgICAgICAgIDxza2lwX2FjY2Vzc19jaGVjaz50cnVlPC9za2lwX2FjY2Vzc19jaGVjaz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L2Rpc2tzPgogICAgICAgIDxwb2xpY2llcz4KICAgICAgICAgICAgPHMzPgogICAgICAgICAgICAgICAgPHZvbHVtZXM+CiAgICAgICAgICAgICAgICAgICAgPG1haW4+CiAgICAgICAgICAgICAgICAgICAgICAgIDxkaXNrPnMzPC9kaXNrPgogICAgICAgICAgICAgICAgICAgIDwvbWFpbj4KICAgICAgICAgICAgICAgIDwvdm9sdW1lcz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L3BvbGljaWVzPgogICAgPC9zdG9yYWdlX2NvbmZpZ3VyYXRpb24+CgogICAgPG1lcmdlX3RyZWU+CiAgICAgICAgPGFsbG93X3JlbW90ZV9mc196ZXJvX2NvcHlfcmVwbGljYXRpb24+MTwvYWxsb3dfcmVtb3RlX2ZzX3plcm9fY29weV9yZXBsaWNhdGlvbj4KICAgIDwvbWVyZ2VfdHJlZT4KCjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/config.d/storage_conf.xml'] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/config.d/storage_conf.xml) && echo PGNsaWNraG91c2U+CgogICAgPHN0b3JhZ2VfY29uZmlndXJhdGlvbj4KICAgICAgICA8ZGlza3M+CiAgICAgICAgICAgIDxzMz4KICAgICAgICAgICAgICAgIDx0eXBlPnMzPC90eXBlPgogICAgICAgICAgICAgICAgPGVuZHBvaW50Pmh0dHA6Ly9taW5pbzE6OTAwMS9yb290L2RhdGEvPC9lbmRwb2ludD4KICAgICAgICAgICAgICAgIDxhY2Nlc3Nfa2V5X2lkPm1pbmlvPC9hY2Nlc3Nfa2V5X2lkPgogICAgICAgICAgICAgICAgPHNlY3JldF9hY2Nlc3Nfa2V5Pm1pbmlvMTIzPC9zZWNyZXRfYWNjZXNzX2tleT4KICAgICAgICAgICAgICAgIDxza2lwX2FjY2Vzc19jaGVjaz50cnVlPC9za2lwX2FjY2Vzc19jaGVjaz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L2Rpc2tzPgogICAgICAgIDxwb2xpY2llcz4KICAgICAgICAgICAgPHMzPgogICAgICAgICAgICAgICAgPHZvbHVtZXM+CiAgICAgICAgICAgICAgICAgICAgPG1haW4+CiAgICAgICAgICAgICAgICAgICAgICAgIDxkaXNrPnMzPC9kaXNrPgogICAgICAgICAgICAgICAgICAgIDwvbWFpbj4KICAgICAgICAgICAgICAgIDwvdm9sdW1lcz4KICAgICAgICAgICAgPC9zMz4KICAgICAgICA8L3BvbGljaWVzPgogICAgPC9zdG9yYWdlX2NvbmZpZ3VyYXRpb24+CgogICAgPG1lcmdlX3RyZWU+CiAgICAgICAgPGFsbG93X3JlbW90ZV9mc196ZXJvX2NvcHlfcmVwbGljYXRpb24+MTwvYWxsb3dfcmVtb3RlX2ZzX3plcm9fY29weV9yZXBsaWNhdGlvbj4KICAgIDwvbWVyZ2VfdHJlZT4KCjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/config.d/storage_conf.xml] http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print 
$1}'] run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/f1143944ecc1ebc9917ce70f0f3674baf634453ba42c715cef16021b7f698a03/json HTTP/1.1" 200 None ClickHouse node started Executing query CREATE TABLE simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date; on node Inserted test data and initialized all tables run container_id:roottestreadonlytable-gw5-node1-1 detach:False nothrow:False cmd: ['ss', '--resolve', '--tcp', '--no-header', 'state', 'ESTABLISHED', '( dport = 2181 or sport = 2181 )'] Command:[docker exec -u root --privileged roottestreadonlytable-gw5-node1-1 ss --resolve --tcp --no-header state ESTABLISHED ( dport = 2181 or sport = 2181 )] No clickhouse process running. Start new one. Stdout:8 http://localhost:None "POST /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node2-1/exec HTTP/1.1" 201 74 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "POST /v1.46/exec/6a10e9cb233ebbfcdf17178e0884ab1fb75d9b1b39e24e509828b305853f614b/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/6a10e9cb233ebbfcdf17178e0884ab1fb75d9b1b39e24e509828b305853f614b/json HTTP/1.1" 200 586 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Stdout:0 0 node1:55626 roottestreadonlytable-gw5-zoo3-1.roottestreadonlytable-gw5_default:2181 Stopping zookeeper node: zoo3 Command:[docker compose --project-name roottestreadonlytable-gw5 --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop zoo3] http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None Stdout: PID TTY TIME CMD Stdout: 4949 ? 
00:00:06 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None Stdout:4949 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None Executing query INSERT INTO simple VALUES ('2020-08-27', 1) on node http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None Executing query CREATE TABLE simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date; on node http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/19b0c5d177ccebb60de33b88f7209a8b98513db5b5f67fe55646484f3f16e883/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.6.5... 
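For reference, the base64 payload piped into /etc/clickhouse-server/config.d/storage_conf.xml earlier in this excerpt decodes (with the same base64 --decode) to the following storage configuration: an S3 disk backed by the MinIO endpoint, a matching 's3' policy, and zero-copy replication enabled for MergeTree:

    <clickhouse>
        <storage_configuration>
            <disks>
                <s3>
                    <type>s3</type>
                    <endpoint>http://minio1:9001/root/data/</endpoint>
                    <access_key_id>minio</access_key_id>
                    <secret_access_key>minio123</secret_access_key>
                    <skip_access_check>true</skip_access_check>
                </s3>
            </disks>
            <policies>
                <s3>
                    <volumes>
                        <main>
                            <disk>s3</disk>
                        </main>
                    </volumes>
                </s3>
            </policies>
        </storage_configuration>
        <merge_tree>
            <allow_remote_fs_zero_copy_replication>1</allow_remote_fs_zero_copy_replication>
        </merge_tree>
    </clickhouse>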
http://localhost:None "GET /v1.46/containers/roottestreplicatingconstants-gw9-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/6d672cefd776907fc1bb287306607d80c3ba5d29d5875d3ad40e23fbaf878d60/json HTTP/1.1" 200 None Stdout:8 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/6d672cefd776907fc1bb287306607d80c3ba5d29d5875d3ad40e23fbaf878d60/json HTTP/1.1" 200 None Stdout:789 Clickhouse process running. run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestreloadauxiliaryzookeepers-gw0-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'echo \'\n \n \n zoo1\n 2181\n \n \n zoo2\n 2181\n \n \n zoo3\n 2181\n \n 2000\n \n \n \n \n zoo1\n 2181\n \n \n zoo2\n 2181\n \n \n \n\' > /etc/clickhouse-server/conf.d/zookeeper_config.xml'] Command:[docker exec roottestreloadauxiliaryzookeepers-gw0-node-1 bash -c echo ' zoo1 2181 zoo2 2181 zoo3 2181 2000 zoo1 2181 zoo2 2181 ' > /etc/clickhouse-server/conf.d/zookeeper_config.xml] Stdout:789 Executing query select 20 on node2 Executing query SYSTEM RELOAD CONFIG on node http://localhost:None "GET /v1.46/containers/6d672cefd776907fc1bb287306607d80c3ba5d29d5875d3ad40e23fbaf878d60/json HTTP/1.1" 200 None ClickHouse node2 started Executing query SELECT uniqExact(x) FROM (SELECT version() as x from remote('node{1,2}', system.one)) on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4949 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env --project-name roottestreplicatingconstants-gw9 --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/docker-compose.yml stop --timeout 20] [gw9] PASSED test_replicating_constants/test.py::test_different_versions Executing query select 20 on node2 run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep 
-v 'bash -c' | awk '{print $1}'] run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestrecoverytimemetric-gw1-node-1/exec HTTP/1.1" 201 74 Stderr: Container roottestrelativefilepath-gw3-node-1 Stopping Stderr: Container roottestrelativefilepath-gw3-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] http://localhost:None "POST /v1.46/exec/154dec89f8dde6ac1f7387114b844d8731a9d164e311c7f170df85ff33718c09/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/154dec89f8dde6ac1f7387114b844d8731a9d164e311c7f170df85ff33718c09/json HTTP/1.1" 200 586 Command:[docker compose --env-file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/.env --project-name roottestrelativefilepath-gw3 --file /ClickHouse/tests/integration/test_relative_filepath/_instances-0-gw3/node/docker-compose.yml down --volumes] Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4949 Executing query select 20 on node2 http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (0): [] Executing query DROP TABLE IF EXISTS test_all_projection_files_are_dropped SYNC on node1 Stderr: Container roottestrelativefilepath-gw3-node-1 Stopping Stderr: Container roottestrelativefilepath-gw3-node-1 Stopped Stderr: Container roottestrelativefilepath-gw3-node-1 Removing Stderr: Container roottestrelativefilepath-gw3-node-1 Removed Stderr: Network roottestrelativefilepath-gw3_default Removing Stderr: Network roottestrelativefilepath-gw3_default Removed Cleanup called Docker networks for project roottestrelativefilepath-gw3 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrelativefilepath-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrelativefilepath-gw3 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrelativefilepath-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Unstopped containers: {} No running containers for project: roottestrelativefilepath-gw3 Trying to prune unused networks... 
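The process check that recurs throughout this excerpt is a single pipeline run inside each container; spelled out as a standalone command (CONTAINER is a placeholder for the container name):

    # Print the PIDs of real clickhouse server processes inside a container.
    # The grep -v filters drop the grep itself, the entrypoint wrapper shell
    # (whose command line mentions both 'coproc' and 'bash -c'), and any
    # other bash -c wrapper, so only server PIDs remain.
    docker exec "$CONTAINER" bash -c \
        "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print \$1}'"

An empty result is what the runner reports as 'No clickhouse process running. Start new one.'; a bare PID such as 'Stdout:789' means the server is already up.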
run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} test_render_log_file_name_templates/test.py::test_check_file_names Cluster name: project_name:roottestrenderlogfilenametemplates-gw3. Added instance name:file-names-from-config tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env', '--project-name', 'roottestrenderlogfilenametemplates-gw3', '--file', '/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ clickhouse_start_command: clickhouse server --config-file=/etc/clickhouse-server/{main_config_file} --log-file=/var/log/clickhouse-server/clickhouse-server-%Y-%m.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server-%Y-%m.err.log Cluster name: project_name:roottestrenderlogfilenametemplates-gw3. Added instance name:file-names-from-params tag:8b2301119731 base_cmd:['docker', 'compose', '--env-file', '/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env', '--project-name', 'roottestrenderlogfilenametemplates-gw3', '--file', '/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml', '--file', '/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/docker-compose.yml'] docker_compose_yml_dir:/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/ Running tests in /ClickHouse/tests/integration/test_render_log_file_name_templates/test.py Cluster start called. is_up=False Executing query CREATE TABLE test_all_projection_files_are_dropped(a UInt32, b UInt32) ENGINE MergeTree() ORDER BY a SETTINGS storage_policy='s3', old_parts_lifetime=0 on node1 Docker networks for project roottestrenderlogfilenametemplates-gw3 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrenderlogfilenametemplates-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Stdout:804 Clickhouse process running. 
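test_render_log_file_name_templates exercises strftime-style placeholders in the server's log paths, once through a config file and once through command-line parameters. The parameterized variant amounts to the invocation below; %Y and %m are presumably rendered by the server, so with this run's date the files come out as clickhouse-server-2025-04.log and clickhouse-server-2025-04.err.log, which is what test_check_file_names verifies:

    clickhouse server \
        --config-file=/etc/clickhouse-server/config.xml \
        --log-file=/var/log/clickhouse-server/clickhouse-server-%Y-%m.log \
        --errorlog-file=/var/log/clickhouse-server/clickhouse-server-%Y-%m.err.log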
run container_id:roottestrecoverytimemetric-gw1-node-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrecoverytimemetric-gw1-node-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Docker volumes for project roottestrenderlogfilenametemplates-gw3 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestrenderlogfilenametemplates-gw3 are NETWORK ID NAME DRIVER SCOPE Stdout:804 Executing query select 20 on node Docker containers for project roottestrenderlogfilenametemplates-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrenderlogfilenametemplates-gw3 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrenderlogfilenametemplates-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrenderlogfilenametemplates-gw3 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: file-names-from-config Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_render_log_file_name_templates/configs/config-file-template.xml'] to /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/configs/config.d Setup database dir /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/database Setup logs dir /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--"] Setup directory for instance: file-names-from-params Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/configs/config.d Setup database dir /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/database Setup logs dir /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server-%Y-%m.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server-%Y-%m.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 
'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env --project-name roottestrenderlogfilenametemplates-gw3 --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/docker-compose.yml pull] Stdout:4949 Executing query ALTER TABLE test_all_projection_files_are_dropped ADD projection b_order (SELECT a, b ORDER BY b) on node1 Executing query SELECT name FROM system.parts where name = 'all_1_1_2' and table = 'table_for_recompression' on node1 Executing query ALTER TABLE test_all_projection_files_are_dropped MATERIALIZE projection b_order on node1 Executing query select 20 on node http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (2): ['data/wjt/jatzbbnodwzexhpyoekpgbdcbliuh', 'data/yva/fnqtnfacozmqquyneypidvzyamfgz'] Executing query INSERT INTO test_all_projection_files_are_dropped VALUES (1, 105), (5, 101), (3, 103), (4, 102), (2, 104) on node1 Executing query OPTIMIZE TABLE table_for_recompression FINAL on node1 Executing query ALTER TABLE test_all_projection_files_are_dropped DROP PARTITION ID 'all' on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4949 Executing query SELECT default_compression_codec FROM system.parts where name = 'all_1_1_2' on node1 Executing query select 20 on node http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (2): ['data/wjt/jatzbbnodwzexhpyoekpgbdcbliuh', 'data/yva/fnqtnfacozmqquyneypidvzyamfgz'] Executing query DROP TABLE IF EXISTS test_all_projection_files_are_dropped SYNC on node1 [gw6] PASSED test_recompression_ttl/test.py::test_recompression_simple Command:[docker compose --env-file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/.env --project-name roottestrecompressionttl-gw6 --file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/docker-compose.yml stop --timeout 20] [gw8] PASSED test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped http://172.16.7.7:9001 "GET 
/root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (0): [] Executing query DROP TABLE IF EXISTS test_hardlinks_preserved_when_projection_dropped SYNC on node1 Executing query DROP TABLE IF EXISTS test_hardlinks_preserved_when_projection_dropped SYNC on node2 Executing query select 20 on node Executing query CREATE TABLE test_hardlinks_preserved_when_projection_dropped ( a UInt32, b UInt32, c UInt32, PROJECTION projection_order_by_b ( SELECT a, b ORDER BY b ) ) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_projection', '{instance}') ORDER BY a SETTINGS cleanup_delay_period=1, max_cleanup_delay_period=3 , storage_policy='s3', old_parts_lifetime=0 on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:4949 Executing query SELECT recovery_time FROM system.clusters WHERE cluster = 'rdb' on node Executing query CREATE TABLE test_hardlinks_preserved_when_projection_dropped ( a UInt32, b UInt32, c UInt32, PROJECTION projection_order_by_b ( SELECT a, b ORDER BY b ) ) ENGINE ReplicatedMergeTree('/clickhouse/tables/test_projection', '{instance}') ORDER BY a SETTINGS cleanup_delay_period=1, max_cleanup_delay_period=3 , storage_policy='s3', old_parts_lifetime=10000 on node2 Executing query DROP DATABASE rdb on node Executing query ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper2:/clickhouse/tables/0/simple'; on node http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (2): ['data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh'] Executing query SYSTEM FLUSH LOGS on node1 Executing query ALTER TABLE simple2 ATTACH PARTITION '2020-08-27'; on node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query SELECT id FROM simple2 on node http://localhost:None "GET /v1.46/exec/e91a304b17c751b0a4c740169e9f8569b21a909e5ef0eb3cc00283b4bf6e5e35/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. 
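The projection DDL interleaved above follows one lifecycle in test_all_projection_files_are_dropped_when_part_is_dropped; replayed by hand it would look roughly like this (statements taken verbatim from the log, the clickhouse-client invocation is assumed):

    clickhouse-client --query "ALTER TABLE test_all_projection_files_are_dropped ADD projection b_order (SELECT a, b ORDER BY b)"
    clickhouse-client --query "ALTER TABLE test_all_projection_files_are_dropped MATERIALIZE projection b_order"
    clickhouse-client --query "INSERT INTO test_all_projection_files_are_dropped VALUES (1, 105), (5, 101), (3, 103), (4, 102), (2, 104)"
    clickhouse-client --query "ALTER TABLE test_all_projection_files_are_dropped DROP PARTITION ID 'all'"

Between steps the test lists the MinIO bucket under prefix data/ to assert exactly which blobs remain; those are the list_objects lines surrounding the queries.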
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/9578e6f2c513f9fc614ecc3a88aa7e0b79c4933aa4620f758761c42c1e2ab99f/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/9578e6f2c513f9fc614ecc3a88aa7e0b79c4933aa4620f758761c42c1e2ab99f/json HTTP/1.1" 200 586 Stderr: Container roottestreadonlytable-gw5-zoo3-1 Stopping Stderr: Container roottestreadonlytable-gw5-zoo3-1 Stopped run container_id:roottestreloadauxiliaryzookeepers-gw0-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'echo \'\n \n \n zoo2\n 2181\n \n 2000\n \n\' > /etc/clickhouse-server/conf.d/zookeeper_config.xml'] Command:[docker exec roottestreloadauxiliaryzookeepers-gw0-node-1 bash -c echo ' zoo2 2181 2000 ' > /etc/clickhouse-server/conf.d/zookeeper_config.xml] Executing query SYSTEM RELOAD CONFIG on node Executing query SELECT uuid FROM system.tables WHERE name = 'test_hardlinks_preserved_when_projection_dropped' on node1 Executing query INSERT INTO test_hardlinks_preserved_when_projection_dropped VALUES (1, 105, 1), (5, 101, 1), (3, 103, 1), (4, 102, 1), (2, 104, 1) on node1 Stderr: Container roottestreplicatingconstants-gw9-node2-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-node1-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-node2-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-node1-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/.env --project-name roottestreplicatingconstants-gw9 --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replicating_constants/_instances-0-gw9/node2/docker-compose.yml down --volumes] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5785 Clickhouse process running. 
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5785 Executing query select 20 on node1 Executing query SYSTEM STOP MERGES on node2 Executing query ALTER TABLE test_hardlinks_preserved_when_projection_dropped UPDATE c = 2 where c = 1 on node1 Stderr: Container roottestreplicatingconstants-gw9-node2-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-node1-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-node2-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-node2-1 Removing Stderr: Container roottestreplicatingconstants-gw9-node1-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-node1-1 Removing Stderr: Container roottestreplicatingconstants-gw9-node1-1 Removed Stderr: Container roottestreplicatingconstants-gw9-node2-1 Removed Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Stopping Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Removing Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Removing Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Stopped Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Removing Stderr: Container roottestreplicatingconstants-gw9-zoo2-1 Removed Stderr: Container roottestreplicatingconstants-gw9-zoo3-1 Removed Stderr: Container roottestreplicatingconstants-gw9-zoo1-1 Removed Stderr: Network roottestreplicatingconstants-gw9_default Removing Stderr: Network roottestreplicatingconstants-gw9_default Removed Cleanup called Docker networks for project roottestreplicatingconstants-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicatingconstants-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicatingconstants-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreplicatingconstants-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreplicatingconstants-gw9 Trying to prune unused networks... Executing query SELECT COUNT() FROM system.replication_queue on node1 Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_replication_without_zookeeper/test.py::test_startup_without_zookeeper Running tests in /ClickHouse/tests/integration/test_replication_without_zookeeper/test.py Cluster start called. 
is_up=False Docker networks for project roottestreplicationwithoutzookeeper-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicationwithoutzookeeper-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Executing query select 20 on node1 Docker volumes for project roottestreplicationwithoutzookeeper-gw9 are DRIVER VOLUME NAME Cleanup called Docker networks for project roottestreplicationwithoutzookeeper-gw9 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicationwithoutzookeeper-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicationwithoutzookeeper-gw9 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreplicationwithoutzookeeper-gw9-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreplicationwithoutzookeeper-gw9 Trying to prune unused networks... Executing query SYSTEM START MERGES on node2 Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_replication_without_zookeeper/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/database Setup logs dir /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" 
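The node1 entrypoint quoted just above is the runner's keep-alive pattern: the server daemonizes immediately, so a coproc'd tail is parked as the shell's job to keep PID 1 alive, and the signal trap turns a docker stop into killing that tail. Unpacked:

    # Entrypoint body (in the compose file '$$' is the escape for a literal
    # '$', so 'wait $$!' ultimately runs as 'wait $!', waiting on the coproc).
    trap 'pkill tail' INT TERM
    clickhouse server --config-file=/etc/clickhouse-server/config.xml \
        --log-file=/var/log/clickhouse-server/clickhouse-server.log \
        --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log \
        --daemon --
    coproc tail -f /dev/null
    wait $!

This is also why the recurring process checks filter out 'coproc' and 'bash -c': the wrapper shell's command line contains both 'clickhouse' and 'coproc', and it must not be mistaken for a running server.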
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env --project-name roottestreplicationwithoutzookeeper-gw9 --file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml pull] Executing query SELECT removal_state FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' AND not active on node1 Executing query select 20 on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 5785 ? 
00:00:03 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5785 Executing query SELECT removal_state FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' AND not active on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5785 Command:[docker compose --env-file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/.env --project-name roottestrecoverytimemetric-gw1 --file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/docker-compose.yml stop --timeout 20] [gw1] PASSED test_recovery_time_metric/test.py::test_recovery_time_metric Executing query SELECT removal_state FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' AND not active on node1 Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: Container roottestrecoverytimemetric-gw1-node-1 Stopping Stderr: Container roottestrecoverytimemetric-gw1-node-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Stdout:5785 Command:[docker compose --env-file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/.env --project-name roottestrecoverytimemetric-gw1 --file /ClickHouse/tests/integration/test_recovery_time_metric/_instances-0-gw1/node/docker-compose.yml down --volumes] Executing query INSERT INTO test_table_0 VALUES (6), (7), (8), (9), (10); on node1 Executing query SELECT removal_state FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' AND not active on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 
'zookeeper2:/clickhouse/tables/0/simple'; on node Stdout:5785 Executing query INSERT INTO test_table_1 VALUES (6), (7), (8), (9), (10); on node1 Executing query SELECT id FROM simple2 on node Stderr: Container roottestrecoverytimemetric-gw1-node-1 Stopping Stderr: Container roottestrecoverytimemetric-gw1-node-1 Stopped Stderr: Container roottestrecoverytimemetric-gw1-node-1 Removing Stderr: Container roottestrecoverytimemetric-gw1-node-1 Removed Stderr: Network roottestrecoverytimemetric-gw1_default Removing Stderr: Network roottestrecoverytimemetric-gw1_default Removed Cleanup called Docker networks for project roottestrecoverytimemetric-gw1 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrecoverytimemetric-gw1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrecoverytimemetric-gw1 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrecoverytimemetric-gw1-.*-1$' --format '{{.ID}}:{{.Names}}'] Executing query INSERT INTO test_table_2 VALUES (6), (7), (8), (9), (10); on node1 Unstopped containers: {} No running containers for project: roottestrecoverytimemetric-gw1 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env --project-name roottestreloadauxiliaryzookeepers-gw0 --file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20] [gw0] PASSED test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers Executing query INSERT INTO test_table_3 VALUES (6), (7), (8), (9), (10); on node1 Executing query INSERT INTO test_table_4 VALUES (6), (7), (8), (9), (10); on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: file-names-from-params Skipped - Image is already being pulled by file-names-from-config Stderr: file-names-from-config Pulling Stderr: file-names-from-config Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env --project-name roottestrenderlogfilenametemplates-gw3 --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env --project-name roottestrenderlogfilenametemplates-gw3 --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml --file 
/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/docker-compose.yml up -d --no-recreate] Stdout:5785 Stderr: zoo1 Skipped - Image is already being pulled by node1 Stderr: zoo2 Skipped - Image is already being pulled by node1 Stderr: zoo3 Skipped - Image is already being pulled by node1 Stderr: node1 Pulling Stderr: node1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper1/log', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper1/config', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper1/coordination', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper2/log', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper2/config', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper2/coordination', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper3/log', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper3/config', '/ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/keeper3/coordination'] Command:[docker compose --project-name roottestreplicationwithoutzookeeper-gw9 --env-file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Executing query INSERT INTO test_table_5 VALUES (6), (7), (8), (9), (10); on node1 Executing query SELECT removal_state FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' AND not active on node1 Stderr: Container roottestrecompressionttl-gw6-node2-1 Stopping Stderr: Container roottestrecompressionttl-gw6-node1-1 Stopping Stderr: Container roottestrecompressionttl-gw6-node1-1 Stopped Stderr: Container roottestrecompressionttl-gw6-node2-1 Stopped Stderr: Container roottestrecompressionttl-gw6-zoo3-1 Stopping Stderr: Container roottestrecompressionttl-gw6-zoo1-1 Stopping Stderr: Container roottestrecompressionttl-gw6-zoo2-1 Stopping Stderr: Container roottestrecompressionttl-gw6-zoo2-1 Stopped Stderr: Container roottestrecompressionttl-gw6-zoo1-1 Stopped Stderr: Container roottestrecompressionttl-gw6-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/.env --project-name roottestrecompressionttl-gw6 --file /ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file 
/ClickHouse/tests/integration/test_recompression_ttl/_instances-0-gw6/node2/docker-compose.yml down --volumes] Executing query INSERT INTO test_table_6 VALUES (6), (7), (8), (9), (10); on node1 Executing query INSERT INTO test_table_7 VALUES (6), (7), (8), (9), (10); on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:5785 Stderr: Network roottestrenderlogfilenametemplates-gw3_default Creating Stderr: Network roottestrenderlogfilenametemplates-gw3_default Created Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Creating Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Creating Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Created Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Created Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Starting Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Starting Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Started Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Started ClickHouse instance created get_instance_ip instance_name=file-names-from-config http://localhost:None "GET /v1.46/containers/roottestrenderlogfilenametemplates-gw3-file-names-from-config-1/json HTTP/1.1" 200 None get_instance_ip instance_name=file-names-from-config http://localhost:None "GET /v1.46/containers/roottestrenderlogfilenametemplates-gw3-file-names-from-config-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in file-names-from-config, ip: 172.16.5.3... 
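The long runs of GET /v1.46/containers/<id>/json requests are docker-py inspecting the container while the helper waits for ClickHouse to start; once the server answers, a trivial query (the 'select 20' probes seen throughout) confirms it. A hand-rolled equivalent of that wait, with the container name taken from this test:

    # Poll until the server inside the container accepts queries.
    until docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 \
            clickhouse-client --query "SELECT 1" >/dev/null 2>&1; do
        sleep 0.5
    done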
http://localhost:None "GET /v1.46/containers/roottestrenderlogfilenametemplates-gw3-file-names-from-config-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Executing query INSERT INTO test_table_8 VALUES (6), (7), (8), (9), (10); on node1 http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Stderr: Container roottestrecompressionttl-gw6-node1-1 Stopping Stderr: Container roottestrecompressionttl-gw6-node2-1 Stopping Stderr: Container roottestrecompressionttl-gw6-node1-1 Stopped Stderr: Container roottestrecompressionttl-gw6-node1-1 Removing Stderr: Container roottestrecompressionttl-gw6-node2-1 Stopped Stderr: Container roottestrecompressionttl-gw6-node2-1 Removing Stderr: Container roottestrecompressionttl-gw6-node2-1 Removed Stderr: Container roottestrecompressionttl-gw6-node1-1 Removed Stderr: Container roottestrecompressionttl-gw6-zoo1-1 Stopping Stderr: Container roottestrecompressionttl-gw6-zoo2-1 Stopping Stderr: Container roottestrecompressionttl-gw6-zoo3-1 Stopping Stderr: Container roottestrecompressionttl-gw6-zoo3-1 Stopped Stderr: Container roottestrecompressionttl-gw6-zoo3-1 Removing Stderr: Container roottestrecompressionttl-gw6-zoo2-1 Stopped Stderr: Container roottestrecompressionttl-gw6-zoo2-1 Removing Stderr: Container roottestrecompressionttl-gw6-zoo1-1 Stopped Stderr: Container roottestrecompressionttl-gw6-zoo1-1 Removing Stderr: Container roottestrecompressionttl-gw6-zoo1-1 Removed Stderr: Container roottestrecompressionttl-gw6-zoo3-1 Removed Stderr: Container roottestrecompressionttl-gw6-zoo2-1 Removed Stderr: Network roottestrecompressionttl-gw6_default Removing Stderr: Network roottestrecompressionttl-gw6_default Removed Cleanup called Docker networks for project roottestrecompressionttl-gw6 are NETWORK ID NAME DRIVER SCOPE http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Docker containers for project roottestrecompressionttl-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrecompressionttl-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrecompressionttl-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrecompressionttl-gw6 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
Command:[docker volume ls | wc -l] Stderr:time="2025-04-02T03:21:20Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreplicationwithoutzookeeper-gw9_default Creating Stderr: Network roottestreplicationwithoutzookeeper-gw9_default Created Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Creating Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Creating Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Creating Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Created Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Created Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Created Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Starting Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Starting Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Starting Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Started Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Started Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Started Stderr:time="2025-04-02T03:21:21Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:21:21Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.6.4, port:2181, use_ssl:False Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Stdout:3 Command:[docker volume prune -f] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Stdout:Total reclaimed space: 0B Volumes pruned: 3 test_replica_can_become_leader/test.py::test_can_become_leader Running tests in /ClickHouse/tests/integration/test_replica_can_become_leader/test.py Cluster start called. is_up=False Docker networks for project roottestreplicacanbecomeleader-gw6 are NETWORK ID NAME DRIVER SCOPE Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Docker containers for project roottestreplicacanbecomeleader-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicacanbecomeleader-gw6 are DRIVER VOLUME NAME Cleanup called Executing query INSERT INTO test_table_9 VALUES (6), (7), (8), (9), (10); on node1 Docker networks for project roottestreplicacanbecomeleader-gw6 are NETWORK ID NAME DRIVER SCOPE http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Docker containers for project roottestreplicacanbecomeleader-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicacanbecomeleader-gw6 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreplicacanbecomeleader-gw6-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreplicacanbecomeleader-gw6 Trying to prune unused networks... Trying to prune unused images... 
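The repeated 'Connection dropped: socket connection error: Connection refused' lines are not failures: the kazoo client obtained via get_kazoo_client simply retries until the freshly started ZooKeeper container begins listening on 2181. The same wait could be approximated with a plain TCP probe (nc is an illustrative stand-in here, not what the helper actually uses):

    # Block until ZooKeeper at the ip/port from the log accepts connections.
    until nc -z 172.16.6.4 2181; do
        sleep 0.5
    done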
Command:[docker image prune -f] Executing query SELECT removal_state FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' AND not active on node1 Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Setup directory for instance: node1 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_replica_can_become_leader/configs/notleader.xml'] to /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/database Setup logs dir /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node2 Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Connection dropped: socket connection error: Connection refused Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_replica_can_become_leader/configs/notleaderignorecase.xml'] to /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/database Setup logs dir /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Setup directory for instance: node3 Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files [] to /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/configs/config.d Setup database dir /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/database Setup logs dir /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/logs Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 
'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env --project-name roottestreplicacanbecomeleader-gw6 --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/docker-compose.yml pull] Starting zookeeper node: zoo3 Command:[docker compose --project-name roottestreadonlytable-gw5 --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml start zoo3] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Executing query SELECT value FROM system.zookeeper WHERE path like '/clickhouse/zero_copy/zero_copy_s3/5cf267b1-3f5a-4518-9737-aca661b3e9f5' AND name = 'all_0_0_0' on node1 http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None Stderr: Container roottestreadonlytable-gw5-zoo3-1 Starting Stderr: Container roottestreadonlytable-gw5-zoo3-1 Started [gw5] PASSED test_read_only_table/test.py::test_restart_zookeeper Command:[docker compose --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --project-name roottestreadonlytable-gw5 --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file 
/ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/docker-compose.yml stop --timeout 20] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SELECT path FROM system.parts WHERE name = 'all_0_0_0' AND table = 'test_hardlinks_preserved_when_projection_dropped' on node2 http://localhost:None "GET /v1.46/exec/9578e6f2c513f9fc614ecc3a88aa7e0b79c4933aa4620f758761c42c1e2ab99f/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/ee32b548b740101d39b93102c926700128ae18a53368ab48d5fe78429369562e/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/ee32b548b740101d39b93102c926700128ae18a53368ab48d5fe78429369562e/json HTTP/1.1" 200 586 http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'INDEX_FILE=/var/lib/clickhouse/disks/s3/store/4e7/4e778c2a-4775-4f2d-84be-31d9d9cc6baa/all_0_0_0//primary.cidx\n cp $INDEX_FILE $INDEX_FILE.backup\n echo "unexpected data in metadata file" | cat > $INDEX_FILE\n '] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c INDEX_FILE=/var/lib/clickhouse/disks/s3/store/4e7/4e778c2a-4775-4f2d-84be-31d9d9cc6baa/all_0_0_0//primary.cidx cp $INDEX_FILE $INDEX_FILE.backup echo "unexpected data in metadata file" | cat > $INDEX_FILE ] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 789 ? 
00:00:04 clickhouse run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c pkill clickhouse] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/dec955efcbab81089f53975dc3a59794e5321ef0c2aada4c164b669fed407ae7/json HTTP/1.1" 200 None ClickHouse file-names-from-config started get_instance_ip instance_name=file-names-from-params http://localhost:None "GET /v1.46/containers/roottestrenderlogfilenametemplates-gw3-file-names-from-params-1/json HTTP/1.1" 200 None get_instance_ip instance_name=file-names-from-params http://localhost:None "GET /v1.46/containers/roottestrenderlogfilenametemplates-gw3-file-names-from-params-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in file-names-from-params, ip: 172.16.5.2... http://localhost:None "GET /v1.46/containers/roottestrenderlogfilenametemplates-gw3-file-names-from-params-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/0324da5940c6973e97f6206ee0768d4fb8039ea886143032918c8160a86bbf57/json HTTP/1.1" 200 None ClickHouse file-names-from-params started log_file /var/log/clickhouse-server/clickhouse-server-2025-04.log err_log_file /var/log/clickhouse-server/clickhouse-server-2025-04.err.log run container_id:roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 detach:False nothrow:True cmd: ['bash', '-c', 'ls -lh /var/log/clickhouse-server/'] Command:[docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 bash -c ls -lh /var/log/clickhouse-server/] Stdout:789 Stdout:total 56K Stdout:-rw-r----- 1 root root 749 Apr 2 03:21 clickhouse-server-2025-04.err.log Stdout:-rw-r----- 1 root root 45K Apr 2 03:21 clickhouse-server-2025-04.log Stdout:-rw------- 1 root root 152 Apr 2 03:21 stderr.log Stdout:-rw-r----- 1 root root 0 Apr 2 03:21 stdout.log check instance 'file-names-from-config': /var/log/clickhouse-server/ contains: total 56K -rw-r----- 1 root root 749 Apr 2 03:21 clickhouse-server-2025-04.err.log -rw-r----- 1 root root 45K Apr 2 03:21 clickhouse-server-2025-04.log -rw------- 1 root root 152 Apr 2 03:21 stderr.log -rw-r----- 1 root root 0 Apr 2 03:21 stdout.log run container_id:roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 detach:False nothrow:True cmd: ['bash', '-c', 'ls /var/log/clickhouse-server/clickhouse-server-2025-04.log'] Command:[docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 bash -c ls /var/log/clickhouse-server/clickhouse-server-2025-04.log] Stdout:/var/log/clickhouse-server/clickhouse-server-2025-04.log run container_id:roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 detach:False nothrow:True cmd: ['bash', '-c', 'ls /var/log/clickhouse-server/clickhouse-server-2025-04.err.log'] Command:[docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 bash -c ls /var/log/clickhouse-server/clickhouse-server-2025-04.err.log] Stdout:/var/log/clickhouse-server/clickhouse-server-2025-04.err.log run 
container_id:roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 detach:False nothrow:True cmd: ['bash', '-c', 'ls -lh /var/log/clickhouse-server/'] Command:[docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 bash -c ls -lh /var/log/clickhouse-server/] Stdout:total 56K Stdout:-rw-r----- 1 root root 749 Apr 2 03:21 clickhouse-server-2025-04.err.log Stdout:-rw-r----- 1 root root 45K Apr 2 03:21 clickhouse-server-2025-04.log Stdout:-rw------- 1 root root 152 Apr 2 03:21 stderr.log Stdout:-rw-r----- 1 root root 0 Apr 2 03:21 stdout.log check instance 'file-names-from-params': /var/log/clickhouse-server/ contains: total 56K -rw-r----- 1 root root 749 Apr 2 03:21 clickhouse-server-2025-04.err.log -rw-r----- 1 root root 45K Apr 2 03:21 clickhouse-server-2025-04.log -rw------- 1 root root 152 Apr 2 03:21 stderr.log -rw-r----- 1 root root 0 Apr 2 03:21 stdout.log run container_id:roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 detach:False nothrow:True cmd: ['bash', '-c', 'ls /var/log/clickhouse-server/clickhouse-server-2025-04.log'] Command:[docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 bash -c ls /var/log/clickhouse-server/clickhouse-server-2025-04.log] Stdout:/var/log/clickhouse-server/clickhouse-server-2025-04.log run container_id:roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 detach:False nothrow:True cmd: ['bash', '-c', 'ls /var/log/clickhouse-server/clickhouse-server-2025-04.err.log'] Command:[docker exec roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 bash -c ls /var/log/clickhouse-server/clickhouse-server-2025-04.err.log] Stdout:/var/log/clickhouse-server/clickhouse-server-2025-04.err.log Command:[docker compose --env-file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env --project-name roottestrenderlogfilenametemplates-gw3 --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/docker-compose.yml stop --timeout 20] [gw3] PASSED test_render_log_file_name_templates/test.py::test_check_file_names Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6624 Clickhouse process running. 
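Annotation: test_check_file_names passes above by rendering a date-based log file name (clickhouse-server-2025-04.log for April 2025) and `ls`-ing the rendered path inside each instance. A small sketch of that check, assuming a strftime-style template such as "clickhouse-server-%Y-%m.log" (the real template lives in the test's configs and is not shown in this log):

    import subprocess
    from datetime import datetime

    def check_rendered_log_name(container: str,
                                template: str = "clickhouse-server-%Y-%m.log") -> None:
        # Render the expected name the same way the server would for the current month
        expected = "/var/log/clickhouse-server/" + datetime.now().strftime(template)
        # Same probe as in the log: `ls <rendered path>` inside the container;
        # a non-zero exit (file missing) fails the check
        subprocess.run(["docker", "exec", container, "bash", "-c", f"ls {expected}"],
                       check=True)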
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6624 Executing query select 20 on node1 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:789 Executing query select 20 on node1 Executing query system refresh view re.a0 on node1 Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:789 Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/.env --project-name roottestreloadauxiliaryzookeepers-gw0 --file /ClickHouse/tests/integration/test_reload_auxiliary_zookeepers/_instances-0-gw0/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes] Executing query system refresh view re.a1 on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 6624 ? 
00:00:04 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6624 Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Removing Stderr: Container roottestreloadauxiliaryzookeepers-gw0-node-1 Removed Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Stopping Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Removing Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Removing Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Stopped Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Removing Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo2-1 Removed Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo3-1 Removed Stderr: Container roottestreloadauxiliaryzookeepers-gw0-zoo1-1 Removed Stderr: Network roottestreloadauxiliaryzookeepers-gw0_default Removing Stderr: Network roottestreloadauxiliaryzookeepers-gw0_default Removed Cleanup called Docker networks for project roottestreloadauxiliaryzookeepers-gw0 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreloadauxiliaryzookeepers-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreloadauxiliaryzookeepers-gw0 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreloadauxiliaryzookeepers-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreloadauxiliaryzookeepers-gw0 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
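Annotation: the pkill/poll pattern that recurs throughout this section (pkill clickhouse, then re-running the `ps ax | grep 'clickhouse' ...` pipeline until it prints nothing, then "No clickhouse process running. Start new one.") can be sketched as below. The pipeline string is copied from the log; the restart step is elided because the exec payload is not visible here:

    import subprocess
    import time

    PS_PIPELINE = ("ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' "
                   "| grep -v 'bash -c' | awk '{print $1}'")

    def clickhouse_pids(container: str) -> list[str]:
        out = subprocess.run(["docker", "exec", container, "bash", "-c", PS_PIPELINE],
                             capture_output=True, text=True)
        return out.stdout.split()

    def kill_and_wait(container: str, timeout: float = 60.0) -> None:
        subprocess.run(["docker", "exec", "-u", "root", container,
                        "bash", "-c", "pkill clickhouse"], check=True)
        deadline = time.time() + timeout
        while clickhouse_pids(container):   # poll until the PID disappears
            if time.time() > deadline:
                raise TimeoutError("clickhouse did not stop")
            time.sleep(1)
        # At this point the harness starts a fresh server through the Docker exec API.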
Command:[docker volume ls | wc -l] Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestreplicatedzerocopyprojectionmutation-gw8-node2-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/58e58894a10baf2cfb1ce4078509fc842805b34f1444b1e8bb8dd434a66bcdd9/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/58e58894a10baf2cfb1ce4078509fc842805b34f1444b1e8bb8dd434a66bcdd9/json HTTP/1.1" 200 586 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6624 run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Stopping Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Stopping Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Stopped Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/.env --project-name roottestrenderlogfilenametemplates-gw3 --file /ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-config/docker-compose.yml --file 
/ClickHouse/tests/integration/test_render_log_file_name_templates/_instances-0-gw3/file-names-from-params/docker-compose.yml down --volumes] Stdout:1578 Clickhouse process running. run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:1578 Executing query select 20 on node2 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6624 Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Stopping Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Stopping Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Stopped Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Removing Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Stopped Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Removing Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-config-1 Removed Stderr: Container roottestrenderlogfilenametemplates-gw3-file-names-from-params-1 Removed Stderr: Network roottestrenderlogfilenametemplates-gw3_default Removing Stderr: Network roottestrenderlogfilenametemplates-gw3_default Removed Cleanup called Docker networks for project roottestrenderlogfilenametemplates-gw3 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrenderlogfilenametemplates-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrenderlogfilenametemplates-gw3 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrenderlogfilenametemplates-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrenderlogfilenametemplates-gw3 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Executing query select 20 on node2 Stdout:3 Command:[docker volume prune -f] Stdout:Total reclaimed space: 0B Volumes pruned: 3 Executing query SYSTEM WAIT LOADING PARTS test_hardlinks_preserved_when_projection_dropped on node2 Executing query SYSTEM FLUSH LOGS on node2 Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
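Annotation: both teardowns above zgrep stderr.log for "==================" before `docker compose down --volumes`; that marker fences sanitizer (ASan/TSan/UBSan) reports, so a hit means the server hit a sanitizer error even if the test itself passed. A condensed sketch (the original command also pipes through an exclusion filter that is a no-op when no pattern is set, omitted here):

    import subprocess

    def sanitizer_reports(logs_dir: str) -> str:
        # -a treats binary files as text, -H prefixes the file name;
        # the glob also catches rotated/compressed stderr.log.* files
        cmd = (f'[ -f {logs_dir}/stderr.log ] && '
               f'zgrep -aH "==================" {logs_dir}/stderr.log* || true')
        return subprocess.run(["bash", "-c", cmd],
                              capture_output=True, text=True).stdout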
Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.6.3, port:2181, use_ssl:False Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.6.2, port:2181, use_ssl:False Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:6624 Failed connecting to Zookeeper within the connection retry policy. 
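Annotation: get_kazoo_client above resolves each keeper container's IP and opens a kazoo session on port 2181; "Failed connecting to Zookeeper within the connection retry policy" is kazoo giving up after its retry budget. A minimal sketch of the same probe, with retry parameters chosen for illustration (the harness's actual policy is not shown in this log):

    from kazoo.client import KazooClient
    from kazoo.retry import KazooRetry

    def probe_keeper(ip: str, port: int = 2181) -> list[str]:
        retry = KazooRetry(max_tries=5, delay=0.5, backoff=2)
        zk = KazooClient(hosts=f"{ip}:{port}", timeout=30.0,
                         connection_retry=retry)
        zk.start()                       # raises if the retry policy is exhausted
        try:
            return zk.get_children("/")  # the log's GetChildren(path='/') smoke check
        finally:
            zk.stop()                    # sends Close(), as in the trace above
            zk.close()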
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env --project-name roottestreplicationwithoutzookeeper-gw9 --file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env --project-name roottestreplicationwithoutzookeeper-gw9 --file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml up -d --no-recreate] Executing query SELECT name, reason, path FROM system.detached_parts WHERE table = 'test_hardlinks_preserved_when_projection_dropped' on node2 Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Running Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Running Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Running Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Creating Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Created Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Starting Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.6.5... 
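Annotation: "Waiting for ClickHouse start in node1, ip: 172.16.6.5..." is followed by a stream of container-inspect calls until the server accepts queries. A rough sketch of such a wait loop using the docker SDK; the probe query and timeout are illustrative rather than the harness's exact values (elsewhere in this log the readiness probe is "select 20"):

    import time
    import docker

    def wait_clickhouse_started(name: str, timeout: float = 120.0) -> None:
        client = docker.from_env()
        deadline = time.time() + timeout
        while time.time() < deadline:
            container = client.containers.get(name)  # the repeated GET .../json calls
            if container.status != "running":
                raise RuntimeError(f"{name} exited while starting")
            code, _ = container.exec_run(["clickhouse", "client", "-q", "SELECT 20"])
            if code == 0:
                return                               # "ClickHouse node1 started"
            time.sleep(0.5)
        raise TimeoutError(f"ClickHouse in {name} did not start in {timeout}s")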
http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None run container_id:roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 detach:False nothrow:False cmd: ['bash', '-c', 'INDEX_FILE=/var/lib/clickhouse/disks/s3/store/4e7/4e778c2a-4775-4f2d-84be-31d9d9cc6baa/detached/broken-on-start_all_0_0_0/primary.cidx\n mv $INDEX_FILE.backup $INDEX_FILE\n '] Command:[docker exec roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 bash -c INDEX_FILE=/var/lib/clickhouse/disks/s3/store/4e7/4e778c2a-4775-4f2d-84be-31d9d9cc6baa/detached/broken-on-start_all_0_0_0/primary.cidx mv $INDEX_FILE.backup $INDEX_FILE ] http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None Executing query ALTER TABLE test_hardlinks_preserved_when_projection_dropped DROP DETACHED PART 'broken-on-start_all_0_0_0' on node2 http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None Executing query CHECK TABLE test_hardlinks_preserved_when_projection_dropped on node1 http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None Stdout:6624 http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None Executing query CHECK TABLE test_hardlinks_preserved_when_projection_dropped on node2 http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None Executing query ALTER TABLE test_hardlinks_preserved_when_projection_dropped DROP PART 'all_0_0_0_1' on node2 http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (36): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/bon/askanidyxefnvndsspxclxcuvbvdh', 'data/cjk/vvwegjkfcjbctrilqpbdylrzydydz', 'data/dro/zmmtayctfaxyvsncyobbzutfeqnna', 
'data/dtr/mhgavpdxunqdplrguhckhuudytkyl', 'data/ebi/wvspvqaoqtmmdxpqlzdumcklonlkz', 'data/emm/lkhrrrjjtdrjlszqxehjtkxjwilos', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fxr/vaifwixwcndxchmtqmvitsfkmkqem', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/hed/hxepiztdvesbxpytutndweqtqgwlu', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/imo/tjrppowoiqkdvujftnywhdvqgbxdd', 'data/iqv/bfbqkwrwwooiinvyqtfhrsanarzri', 'data/irm/plomxvhrullvorgtzjymbmfutqopo', 'data/mdw/ryellrwbqyazlkedermhkhargdkfc', 'data/mgu/cltnnmtcdsncbbzcsaicjydqdjmye', 'data/mlp/tjoyvcswwnfzmiqxtltfczjlvacbo', 'data/osa/hjtpjfldimrtfizyxqlgjfxwaqeqv', 'data/oyh/vzkufrhcgqazlffqdcffvrrlfhdvg', 'data/pel/ccutxxxyhekhcxkaykexajhbddgka', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/trh/xulhbcgwrfxoodptwfqmxkhzbzkwb', 'data/ulg/vbpwoauobhomiwpzbpivgzmtxjyeg', 'data/uui/drumxvqrbfjlaolacmdwoeuiorpgl', 'data/uxe/ykxoporszvtptpvpetbrpufjzcjyr', 'data/vvg/usamesxqlgsfxplmokhzjkfncozzo', 'data/vxa/asrjpbhfwrnpjnwbmpefsduuqlfnt', 'data/wcz/iqmiowyiyjykjqqybzyyamjgtgott', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 'data/wfz/qumutxzizaadrrtukiqodhtpjnbba', 'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw', 'data/ylj/qssjojykonduchrgmvbtgxiyjiwfh'] http://localhost:None "GET /v1.46/containers/e214ae707b9d6579842f2b9cc419f4717bba4dd3955adb308a0f6f43e8aba572/json HTTP/1.1" 200 None ClickHouse node1 started Executing query CREATE DATABASE test; CREATE TABLE test_table(date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated', 'node1') ORDER BY id PARTITION BY toYYYYMM(date); on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query INSERT INTO test_table VALUES ('2018-10-01', 1), ('2018-10-02', 2), ('2018-10-03', 3) on node1 Stdout:6624 Executing query SELECT COUNT(*) from test_table on node1 Executing query SELECT is_readonly from system.replicas where table='test_table' on node1 http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (18): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/ebi/wvspvqaoqtmmdxpqlzdumcklonlkz', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/hed/hxepiztdvesbxpytutndweqtqgwlu', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/mgu/cltnnmtcdsncbbzcsaicjydqdjmye', 'data/osa/hjtpjfldimrtfizyxqlgjfxwaqeqv', 'data/pel/ccutxxxyhekhcxkaykexajhbddgka', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/uxe/ykxoporszvtptpvpetbrpufjzcjyr', 'data/vxa/asrjpbhfwrnpjnwbmpefsduuqlfnt', 'data/wcz/iqmiowyiyjykjqqybzyyamjgtgott', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw'] get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.6.4, port:2181, use_ssl:False Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False Sending request(xid=None): 
Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED run_kazoo_commands_with_retries: zoo1, Sending request(xid=1): GetChildren(path='/clickhouse', watcher=None) Received response(xid=1): ['sessions', 'tables', 'task_queue'] Sending request(xid=2): GetChildren(path='/clickhouse/sessions', watcher=None) Received response(xid=2): ['zookeeper'] Sending request(xid=3): GetChildren(path='/clickhouse/sessions/zookeeper', watcher=None) Received response(xid=3): ['cca05b83-6180-4f37-bfd6-a42f415f8a0a'] Sending request(xid=4): GetChildren(path='/clickhouse/sessions/zookeeper/cca05b83-6180-4f37-bfd6-a42f415f8a0a', watcher=None) Received response(xid=4): [] Sending request(xid=5): Delete(path='/clickhouse/sessions/zookeeper/cca05b83-6180-4f37-bfd6-a42f415f8a0a', version=-1) Received response(xid=5): True Sending request(xid=6): Delete(path='/clickhouse/sessions/zookeeper', version=-1) Received response(xid=6): True Sending request(xid=7): Delete(path='/clickhouse/sessions', version=-1) Received response(xid=7): True Sending request(xid=8): GetChildren(path='/clickhouse/tables', watcher=None) Received response(xid=8): ['test'] Sending request(xid=9): GetChildren(path='/clickhouse/tables/test', watcher=None) Received response(xid=9): ['replicated'] Sending request(xid=10): GetChildren(path='/clickhouse/tables/test/replicated', watcher=None) Received response(xid=10): ['log', 'async_blocks', 'part_moves_shard', 'blocks', 'nonincrement_block_numbers', 'leader_election', 'columns', 'alter_partition_version', 'replicas', 'metadata', 'table_shared_id', 'temp', 'mutations', 'block_numbers', 'pinned_part_uuids', 'lost_part_count', 'quorum'] Sending request(xid=11): GetChildren(path='/clickhouse/tables/test/replicated/log', watcher=None) Received response(xid=11): ['log-0000000000'] Sending request(xid=12): GetChildren(path='/clickhouse/tables/test/replicated/log/log-0000000000', watcher=None) Received response(xid=12): [] Sending request(xid=13): Delete(path='/clickhouse/tables/test/replicated/log/log-0000000000', version=-1) Received response(xid=13): True Sending request(xid=14): Delete(path='/clickhouse/tables/test/replicated/log', version=-1) Received response(xid=14): True Sending request(xid=15): GetChildren(path='/clickhouse/tables/test/replicated/async_blocks', watcher=None) Received response(xid=15): [] Sending request(xid=16): Delete(path='/clickhouse/tables/test/replicated/async_blocks', version=-1) Received response(xid=16): True Sending request(xid=17): GetChildren(path='/clickhouse/tables/test/replicated/part_moves_shard', watcher=None) Received response(xid=17): [] Sending request(xid=18): Delete(path='/clickhouse/tables/test/replicated/part_moves_shard', version=-1) Received response(xid=18): True Sending request(xid=19): GetChildren(path='/clickhouse/tables/test/replicated/blocks', watcher=None) Received response(xid=19): ['201810_2956868034535131113_3759223844523231509'] Sending request(xid=20): GetChildren(path='/clickhouse/tables/test/replicated/blocks/201810_2956868034535131113_3759223844523231509', watcher=None) Received response(xid=20): [] Sending request(xid=21): Delete(path='/clickhouse/tables/test/replicated/blocks/201810_2956868034535131113_3759223844523231509', version=-1) Received response(xid=21): True Sending request(xid=22): Delete(path='/clickhouse/tables/test/replicated/blocks', version=-1) Received 
response(xid=22): True Sending request(xid=23): GetChildren(path='/clickhouse/tables/test/replicated/nonincrement_block_numbers', watcher=None) Received response(xid=23): [] Sending request(xid=24): Delete(path='/clickhouse/tables/test/replicated/nonincrement_block_numbers', version=-1) Received response(xid=24): True Sending request(xid=25): GetChildren(path='/clickhouse/tables/test/replicated/leader_election', watcher=None) Received response(xid=25): ['leader_election-0'] Sending request(xid=26): GetChildren(path='/clickhouse/tables/test/replicated/leader_election/leader_election-0', watcher=None) Received response(xid=26): [] Sending request(xid=27): Delete(path='/clickhouse/tables/test/replicated/leader_election/leader_election-0', version=-1) Received response(xid=27): True Sending request(xid=28): Delete(path='/clickhouse/tables/test/replicated/leader_election', version=-1) Received response(xid=28): True Sending request(xid=29): GetChildren(path='/clickhouse/tables/test/replicated/columns', watcher=None) Received response(xid=29): [] Sending request(xid=30): Delete(path='/clickhouse/tables/test/replicated/columns', version=-1) Received response(xid=30): True Sending request(xid=31): GetChildren(path='/clickhouse/tables/test/replicated/alter_partition_version', watcher=None) Received response(xid=31): [] Sending request(xid=32): Delete(path='/clickhouse/tables/test/replicated/alter_partition_version', version=-1) Received response(xid=32): True Sending request(xid=33): GetChildren(path='/clickhouse/tables/test/replicated/replicas', watcher=None) Received response(xid=33): ['node1'] Sending request(xid=34): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1', watcher=None) Received response(xid=34): ['min_unprocessed_insert_time', 'parts', 'max_processed_insert_time', 'is_active', 'columns', 'creator_info', 'queue', 'flags', 'mutation_pointer', 'is_lost', 'log_pointer', 'host', 'metadata_version', 'metadata'] Sending request(xid=35): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/min_unprocessed_insert_time', watcher=None) Received response(xid=35): [] Sending request(xid=36): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/min_unprocessed_insert_time', version=-1) Received response(xid=36): True Sending request(xid=37): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/parts', watcher=None) Received response(xid=37): ['201810_0_0_0'] Sending request(xid=38): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/parts/201810_0_0_0', watcher=None) Received response(xid=38): [] Sending request(xid=39): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/parts/201810_0_0_0', version=-1) Received response(xid=39): True Sending request(xid=40): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/parts', version=-1) Received response(xid=40): True Sending request(xid=41): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/max_processed_insert_time', watcher=None) Received response(xid=41): [] Sending request(xid=42): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/max_processed_insert_time', version=-1) Received response(xid=42): True Sending request(xid=43): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/is_active', watcher=None) Received response(xid=43): [] Sending request(xid=44): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/is_active', version=-1) Received response(xid=44): True Sending request(xid=45): 
GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/columns', watcher=None) Received response(xid=45): [] Sending request(xid=46): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/columns', version=-1) Received response(xid=46): True Sending request(xid=47): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/creator_info', watcher=None) Received response(xid=47): [] Sending request(xid=48): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/creator_info', version=-1) Received response(xid=48): True Sending request(xid=49): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/queue', watcher=None) Received response(xid=49): [] Sending request(xid=50): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/queue', version=-1) Received response(xid=50): True Sending request(xid=51): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/flags', watcher=None) Received response(xid=51): [] Sending request(xid=52): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/flags', version=-1) Received response(xid=52): True Sending request(xid=53): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/mutation_pointer', watcher=None) Received response(xid=53): [] Sending request(xid=54): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/mutation_pointer', version=-1) Received response(xid=54): True Sending request(xid=55): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/is_lost', watcher=None) Received response(xid=55): [] Sending request(xid=56): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/is_lost', version=-1) Received response(xid=56): True Sending request(xid=57): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/log_pointer', watcher=None) Received response(xid=57): [] Sending request(xid=58): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/log_pointer', version=-1) Received response(xid=58): True Sending request(xid=59): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/host', watcher=None) Received response(xid=59): [] Sending request(xid=60): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/host', version=-1) Received response(xid=60): True Sending request(xid=61): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/metadata_version', watcher=None) Received response(xid=61): [] Sending request(xid=62): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/metadata_version', version=-1) Received response(xid=62): True Sending request(xid=63): GetChildren(path='/clickhouse/tables/test/replicated/replicas/node1/metadata', watcher=None) Received response(xid=63): [] Sending request(xid=64): Delete(path='/clickhouse/tables/test/replicated/replicas/node1/metadata', version=-1) Received response(xid=64): True Sending request(xid=65): Delete(path='/clickhouse/tables/test/replicated/replicas/node1', version=-1) Received response(xid=65): True Sending request(xid=66): Delete(path='/clickhouse/tables/test/replicated/replicas', version=-1) Received response(xid=66): True Sending request(xid=67): GetChildren(path='/clickhouse/tables/test/replicated/metadata', watcher=None) Received response(xid=67): [] Sending request(xid=68): Delete(path='/clickhouse/tables/test/replicated/metadata', version=-1) Received response(xid=68): True Sending request(xid=69): GetChildren(path='/clickhouse/tables/test/replicated/table_shared_id', watcher=None) Received 
response(xid=69): [] Sending request(xid=70): Delete(path='/clickhouse/tables/test/replicated/table_shared_id', version=-1) Received response(xid=70): True Sending request(xid=71): GetChildren(path='/clickhouse/tables/test/replicated/temp', watcher=None) Received response(xid=71): ['abandonable_lock-insert', 'abandonable_lock-other'] Sending request(xid=72): GetChildren(path='/clickhouse/tables/test/replicated/temp/abandonable_lock-insert', watcher=None) Received response(xid=72): [] Sending request(xid=73): Delete(path='/clickhouse/tables/test/replicated/temp/abandonable_lock-insert', version=-1) Received response(xid=73): True Sending request(xid=74): GetChildren(path='/clickhouse/tables/test/replicated/temp/abandonable_lock-other', watcher=None) Received response(xid=74): [] Sending request(xid=75): Delete(path='/clickhouse/tables/test/replicated/temp/abandonable_lock-other', version=-1) Received response(xid=75): True Sending request(xid=76): Delete(path='/clickhouse/tables/test/replicated/temp', version=-1) Received response(xid=76): True Sending request(xid=77): GetChildren(path='/clickhouse/tables/test/replicated/mutations', watcher=None) Received response(xid=77): [] Sending request(xid=78): Delete(path='/clickhouse/tables/test/replicated/mutations', version=-1) Received response(xid=78): True Sending request(xid=79): GetChildren(path='/clickhouse/tables/test/replicated/block_numbers', watcher=None) Received response(xid=79): ['201810'] Sending request(xid=80): GetChildren(path='/clickhouse/tables/test/replicated/block_numbers/201810', watcher=None) Received response(xid=80): [] Sending request(xid=81): Delete(path='/clickhouse/tables/test/replicated/block_numbers/201810', version=-1) Received response(xid=81): True Sending request(xid=82): Delete(path='/clickhouse/tables/test/replicated/block_numbers', version=-1) Received response(xid=82): True Sending request(xid=83): GetChildren(path='/clickhouse/tables/test/replicated/pinned_part_uuids', watcher=None) Received response(xid=83): [] Sending request(xid=84): Delete(path='/clickhouse/tables/test/replicated/pinned_part_uuids', version=-1) Received response(xid=84): True Sending request(xid=85): GetChildren(path='/clickhouse/tables/test/replicated/lost_part_count', watcher=None) Received response(xid=85): [] Sending request(xid=86): Delete(path='/clickhouse/tables/test/replicated/lost_part_count', version=-1) Received response(xid=86): True Sending request(xid=87): GetChildren(path='/clickhouse/tables/test/replicated/quorum', watcher=None) Received response(xid=87): ['parallel', 'failed_parts', 'last_part'] Sending request(xid=88): GetChildren(path='/clickhouse/tables/test/replicated/quorum/parallel', watcher=None) Received response(xid=88): [] Sending request(xid=89): Delete(path='/clickhouse/tables/test/replicated/quorum/parallel', version=-1) Received response(xid=89): True Sending request(xid=90): GetChildren(path='/clickhouse/tables/test/replicated/quorum/failed_parts', watcher=None) Received response(xid=90): [] Sending request(xid=91): Delete(path='/clickhouse/tables/test/replicated/quorum/failed_parts', version=-1) Received response(xid=91): True Sending request(xid=92): GetChildren(path='/clickhouse/tables/test/replicated/quorum/last_part', watcher=None) Received response(xid=92): [] Sending request(xid=93): Delete(path='/clickhouse/tables/test/replicated/quorum/last_part', version=-1) Received response(xid=93): True Sending request(xid=94): Delete(path='/clickhouse/tables/test/replicated/quorum', version=-1) Received 
response(xid=94): True Sending request(xid=95): Delete(path='/clickhouse/tables/test/replicated', version=-1) Received response(xid=95): True Sending request(xid=96): Delete(path='/clickhouse/tables/test', version=-1) Received response(xid=96): True Sending request(xid=97): Delete(path='/clickhouse/tables', version=-1) Received response(xid=97): True Sending request(xid=98): GetChildren(path='/clickhouse/task_queue', watcher=None) Received response(xid=98): ['replicas', 'ddl'] Sending request(xid=99): GetChildren(path='/clickhouse/task_queue/replicas', watcher=None) Received response(xid=99): ['node1:9000'] Sending request(xid=100): GetChildren(path='/clickhouse/task_queue/replicas/node1:9000', watcher=None) Received response(xid=100): ['active'] Sending request(xid=101): GetChildren(path='/clickhouse/task_queue/replicas/node1:9000/active', watcher=None) Received response(xid=101): [] Sending request(xid=102): Delete(path='/clickhouse/task_queue/replicas/node1:9000/active', version=-1) Received response(xid=102): True Sending request(xid=103): Delete(path='/clickhouse/task_queue/replicas/node1:9000', version=-1) Received response(xid=103): True Sending request(xid=104): Delete(path='/clickhouse/task_queue/replicas', version=-1) Received response(xid=104): True Sending request(xid=105): GetChildren(path='/clickhouse/task_queue/ddl', watcher=None) Received response(xid=105): [] Sending request(xid=106): Delete(path='/clickhouse/task_queue/ddl', version=-1) Received response(xid=106): True Sending request(xid=107): Delete(path='/clickhouse/task_queue', version=-1) Received response(xid=107): True Sending request(xid=108): Delete(path='/clickhouse', version=-1) Received response(xid=108): True Sending request(xid=109): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED http://localhost:None "GET /v1.46/exec/ee32b548b740101d39b93102c926700128ae18a53368ab48d5fe78429369562e/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. 
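Annotation: the long xid=1..108 trace above is the harness wiping the /clickhouse subtree between tests: a depth-first walk that calls GetChildren, deletes every child, then deletes the parent, with every Delete sent as version=-1 (unconditional). The same traversal in a few lines of kazoo; note kazoo can also do this natively with zk.delete(path, recursive=True):

    from kazoo.client import KazooClient

    def delete_recursive(zk: KazooClient, path: str) -> None:
        # Children first, parents last, exactly the Delete ordering in the trace
        for child in zk.get_children(path):
            delete_recursive(zk, path.rstrip("/") + "/" + child)
        zk.delete(path)  # kazoo's version argument defaults to -1

    # e.g. delete_recursive(zk, "/clickhouse") ends, like the trace, with
    # Delete(path='/clickhouse', version=-1)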
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/2c027f515257078859e7c52519b640dfd5b3863ee9c60f00929554af910dedd9/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/2c027f515257078859e7c52519b640dfd5b3863ee9c60f00929554af910dedd9/json HTTP/1.1" 200 586 Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (18): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/ebi/wvspvqaoqtmmdxpqlzdumcklonlkz', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/hed/hxepiztdvesbxpytutndweqtqgwlu', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/mgu/cltnnmtcdsncbbzcsaicjydqdjmye', 'data/osa/hjtpjfldimrtfizyxqlgjfxwaqeqv', 'data/pel/ccutxxxyhekhcxkaykexajhbddgka', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/uxe/ykxoporszvtptpvpetbrpufjzcjyr', 'data/vxa/asrjpbhfwrnpjnwbmpefsduuqlfnt', 'data/wcz/iqmiowyiyjykjqqybzyyamjgtgott', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw'] Stderr: node3 Skipped - Image is already being pulled by node2 Stderr: node1 Skipped - Image is already being pulled by node2 Stderr: zoo1 Skipped - Image is already being pulled by node2 Stderr: zoo2 Skipped - Image is already being pulled by node2 Stderr: zoo3 Skipped - Image is already being pulled by node2 Stderr: node2 Pulling Stderr: node2 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper1/log', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper1/config', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper1/coordination', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper2/log', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper2/config', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper2/coordination', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper3/log', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper3/config', '/ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/keeper3/coordination'] Command:[docker compose --project-name roottestreplicacanbecomeleader-gw6 --env-file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 
'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Stdout:7463 Clickhouse process running. run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7463 Executing query select 20 on node1 Stderr:time="2025-04-02T03:21:32Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreplicacanbecomeleader-gw6_default Creating Stderr: Network roottestreplicacanbecomeleader-gw6_default Created Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Creating Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Creating Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Creating Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Created Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Created Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Created Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Starting Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Starting Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Starting Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Started Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Started Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Started Stderr:time="2025-04-02T03:21:32Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:21:32Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.1.2, port:2181, use_ssl:False Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (18): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/ebi/wvspvqaoqtmmdxpqlzdumcklonlkz', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/hed/hxepiztdvesbxpytutndweqtqgwlu', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/mgu/cltnnmtcdsncbbzcsaicjydqdjmye', 'data/osa/hjtpjfldimrtfizyxqlgjfxwaqeqv', 'data/pel/ccutxxxyhekhcxkaykexajhbddgka', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/uxe/ykxoporszvtptpvpetbrpufjzcjyr', 'data/vxa/asrjpbhfwrnpjnwbmpefsduuqlfnt', 'data/wcz/iqmiowyiyjykjqqybzyyamjgtgott', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 
'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw'] Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query select 20 on node1 Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 7463 ? 00:00:04 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7463 http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (10): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw'] Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7463 Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (10): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw'] Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 
'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7463 http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0 list_objects (10): ['data/ber/gkqbpmzqdahflzrjaabxupjlvjwtg', 'data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/fyh/emamaltdihtmwmxqtpuipzmplsgid', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh', 'data/pun/aqpjahaaabehgygjjecmgykvukgya', 'data/rhq/idguavenoqmoythotgpwnyjtyrseo', 'data/sqz/ohrmuhfjdwtogdkovxuzaxuhvnezm', 'data/svq/vcgvsgbmcxoljrfrklpaazqpznqwi', 'data/wek/sjlmflyzymfscejvucctykfybyjos', 'data/wwe/wlqfsshomuspexhgpeegvddvpbnqw'] Executing query SELECT COUNT(*) from test_table on node1 Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query INSERT INTO test_table VALUES ('2018-10-01', 1), ('2018-10-02', 2), ('2018-10-03', 3) on node1 Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.1.4, port:2181, use_ssl:False Connecting to 172.16.1.4(172.16.1.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.1.3, port:2181, use_ssl:False Connecting to 172.16.1.3(172.16.1.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Stdout:7463 Failed connecting to Zookeeper within the connection retry policy. 
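The Connect/GetChildren/Close triples above are kazoo's wire-level debug log of the harness checking that each Keeper node answers. A minimal equivalent session, assuming the kazoo client library; the host and 30 s timeout come from the logged Connect request, while the retry policy below is an assumption:

from kazoo.client import KazooClient
from kazoo.retry import KazooRetry

# One probe, as logged: connect, list the root znode, close.
zk = KazooClient(
    hosts="172.16.1.4:2181",                   # get_kazoo_client: zoo2 in the log
    timeout=30.0,                              # time_out=30000 in the Connect request
    connection_retry=KazooRetry(max_tries=3),  # retry policy is an assumption
)
zk.start()                   # "Zookeeper connection established, state: CONNECTED"
print(zk.get_children("/"))  # -> ['keeper']
zk.stop()                    # Close(); eventually "Zookeeper session closed, state: CLOSED"
zk.close()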
Zookeeper session closed, state: CLOSED
All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3')
('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env --project-name roottestreplicacanbecomeleader-gw6 --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/docker-compose.yml up -d --no-recreate')
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env --project-name roottestreplicacanbecomeleader-gw6 --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/docker-compose.yml up -d --no-recreate]
run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 8 ? 00:00:02 clickhouse
run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c pkill clickhouse]
run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0
list_objects (2): ['data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh']
http://172.16.7.7:9001 "GET /root?delimiter=&encoding-type=url&list-type=2&max-keys=1000&prefix=data%2F HTTP/1.1" 200 0
list_objects (2): ['data/exl/ozlimqqduigjczbkhbiaybrntmlnp', 'data/ihn/oiqztjctdamjikjbsllkiyfcohyqh']
Executing query DROP TABLE IF EXISTS test_hardlinks_preserved_when_projection_dropped SYNC on node1
Stdout:8
Executing query DROP TABLE IF EXISTS test_hardlinks_preserved_when_projection_dropped SYNC on node2
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Running
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Running
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Running
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Creating
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Creating
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Creating
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Created
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Created
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Created
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Starting
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Starting
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Starting
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Started
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Started
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Started
ClickHouse instance created
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node1-1/json HTTP/1.1" 200 None
get_instance_ip instance_name=node1
http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node1-1/json HTTP/1.1" 200 None
Waiting for ClickHouse start in node1, ip: 172.16.1.7...
http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node1-1/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/docker-compose.yml stop --timeout 20]
[gw8] PASSED test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
Stdout:7463
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None
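The "GET /root?...list-type=2...prefix=data%2F" requests against 172.16.7.7:9001 are S3 ListObjectsV2 calls to the MinIO container; the zero-copy projection test watches the data/ prefix shrink from 18 to 10 to 2 blobs as parts and projections are dropped. Roughly the same listing with the minio client, as a sketch (the credentials below are placeholders, not the suite's real ones):

from minio import Minio

s3 = Minio("172.16.7.7:9001", access_key="ACCESS", secret_key="SECRET", secure=False)
# One ListObjectsV2 page over the same prefix as the logged request (prefix=data%2F):
blobs = [o.object_name for o in s3.list_objects("root", prefix="data/", recursive=True)]
print(len(blobs), blobs)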
http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/3941b1b64dbc7793ee61342f735ccd0e214d4aa5e506d0abdf004ff6eb23abd5/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.1.6... http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/1c5bdb6bedb9e2cda01f8ef24602fd19a0c08deabf6bdaf8248b37f3d5edf6ea/json HTTP/1.1" 200 None ClickHouse node2 started get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node3, ip: 172.16.1.5... 
http://localhost:None "GET /v1.46/containers/roottestreplicacanbecomeleader-gw6-node3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/64fa42eba0410400545378a1a7cf9117c33cc3163e019906287da170a2dbb6be/json HTTP/1.1" 200 None ClickHouse node3 started Executing query CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', '0') PARTITION BY date ORDER BY id on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:7463 run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8 Executing query CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', '1') PARTITION BY date ORDER BY id on node2 Connection dropped: socket connection error: No route to host Connection dropped: socket connection error: No route to host Connection dropped: socket connection error: No route to host Executing query CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', '3') PARTITION BY date ORDER BY id SETTINGS replicated_can_become_leader=0sad on node3 Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query select can_become_leader from system.replicas where table = 'test_table' on node1 Stdout:7463 run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Executing query select can_become_leader from system.replicas where table = 'test_table' on node2 Stdout:8 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env --project-name roottestreplicacanbecomeleader-gw6 --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/docker-compose.yml stop --timeout 20] [gw6] PASSED 
test_replica_can_become_leader/test.py::test_can_become_leader run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] http://localhost:None "GET /v1.46/exec/2c027f515257078859e7c52519b640dfd5b3863ee9c60f00929554af910dedd9/json HTTP/1.1" 200 584 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/4d2ef4869cecfbb20581a725a6bbf15db373792a4aba89e49e4223d1d77e3a79/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/4d2ef4869cecfbb20581a725a6bbf15db373792a4aba89e49e4223d1d77e3a79/json HTTP/1.1" 200 586 run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestreplicationwithoutzookeeper-gw9-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/98bc7ec2488136c845bc81a9329349014effaeeb52409927a02bced7ff88d4b5/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/98bc7ec2488136c845bc81a9329349014effaeeb52409927a02bced7ff88d4b5/json HTTP/1.1" 200 586 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:8314 Clickhouse process running. 
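The "0sad" in the CREATE on node3 above is not a transcription glitch: it is a deliberately unparsable value for replicated_can_become_leader, and since the test still reports "[gw6] PASSED", the suite presumably asserts that the server rejects that CREATE. A hedged sketch of that assertion; node3 and start_cluster stand in for the suite's fixtures, and QueryRuntimeException is the exception type ClickHouse's integration helpers raise for failed queries:

import pytest
from helpers.client import QueryRuntimeException  # ClickHouse integration-test helper

def test_invalid_can_become_leader(start_cluster):
    # node3 is assumed to be the module-level ClickHouseInstance, as in the suite.
    # '0sad' cannot parse as a boolean setting, so the CREATE must fail:
    with pytest.raises(QueryRuntimeException):
        node3.query(
            "CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) "
            "ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', '3') "
            "PARTITION BY date ORDER BY id "
            "SETTINGS replicated_can_become_leader=0sad"
        )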
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8314
Executing query select 20 on node1
run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stderr: Container roottestreadonlytable-gw5-node1-1 Stopping
Stderr: Container roottestreadonlytable-gw5-node2-1 Stopping
Stderr: Container roottestreadonlytable-gw5-node3-1 Stopping
Connection dropped: socket connection error: No route to host
Stderr: Container roottestreadonlytable-gw5-node3-1 Stopped
Stderr: Container roottestreadonlytable-gw5-node2-1 Stopped
Stderr: Container roottestreadonlytable-gw5-node1-1 Stopped
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Stopping
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Stopping
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Stopping
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Stopped
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Stopped
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Stdout:780
Clickhouse process running.
run container_id:roottestreplicationwithoutzookeeper-gw9-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestreplicationwithoutzookeeper-gw9-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Command:[bash -c [ -f /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/.env --project-name roottestreadonlytable-gw5 --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_read_only_table/_instances-0-gw5/node3/docker-compose.yml down --volumes]
Stdout:780
Executing query select 20 on node1
Executing query select 20 on node1
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Stderr: Container roottestreadonlytable-gw5-node1-1 Stopping
Stderr: Container roottestreadonlytable-gw5-node3-1 Stopping
Stderr: Container roottestreadonlytable-gw5-node2-1 Stopping
Stderr: Container roottestreadonlytable-gw5-node1-1 Stopped
Stderr: Container roottestreadonlytable-gw5-node1-1 Removing
Stderr: Container roottestreadonlytable-gw5-node3-1 Stopped
Stderr: Container roottestreadonlytable-gw5-node3-1 Removing
Stderr: Container roottestreadonlytable-gw5-node2-1 Stopped
Stderr: Container roottestreadonlytable-gw5-node2-1 Removing
Stderr: Container roottestreadonlytable-gw5-node1-1 Removed
Stderr: Container roottestreadonlytable-gw5-node2-1 Removed
Stderr: Container roottestreadonlytable-gw5-node3-1 Removed
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Stopping
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Stopping
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Stopping
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Stopped
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Removing
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Stopped
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Removing
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Stopped
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Removing
Stderr: Container roottestreadonlytable-gw5-zoo1-1 Removed
Stderr: Container roottestreadonlytable-gw5-zoo2-1 Removed
Stderr: Container roottestreadonlytable-gw5-zoo3-1 Removed
Stderr: Network roottestreadonlytable-gw5_default Removing
Stderr: Network roottestreadonlytable-gw5_default Removed
Cleanup called
Docker networks for project roottestreadonlytable-gw5 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreadonlytable-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreadonlytable-gw5 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreadonlytable-gw5-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestreadonlytable-gw5
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Executing query select 20 on node1
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
test_replica_is_active/test.py::test_replica_is_active
Running tests in /ClickHouse/tests/integration/test_replica_is_active/test.py
Cluster start called. is_up=False
Docker networks for project roottestreplicaisactive-gw5 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreplicaisactive-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreplicaisactive-gw5 are DRIVER VOLUME NAME
Cleanup called
Docker networks for project roottestreplicaisactive-gw5 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreplicaisactive-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreplicaisactive-gw5 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreplicaisactive-gw5-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestreplicaisactive-gw5
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Executing query SELECT COUNT(*) from test_table on node1
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
Setup directory for instance: node1
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/database
Setup logs dir /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Setup directory for instance: node2
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/database
Setup logs dir /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Setup directory for instance: node3
Create directory for configuration generated in this helper
Create directory for common tests configuration
Copy common configuration from helpers
Generate and write macros file
Copy custom test config files [] to /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/configs/config.d
Setup database dir /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/database
Setup logs dir /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/logs
Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"]
Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:8b2301119731', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
No config file found
http://localhost:None "GET /version HTTP/1.1" 200 826
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env --project-name roottestreplicaisactive-gw5 --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/docker-compose.yml pull]
Executing query select 20 on node1
Executing query SELECT is_readonly from system.replicas where table='test_table' on node1
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse]
Stdout: PID TTY TIME CMD
Stdout: 8314 ? 00:00:05 clickhouse
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse']
Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse]
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env --project-name roottestreplicationwithoutzookeeper-gw9 --file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20]
[gw9] PASSED test_replication_without_zookeeper/test.py::test_startup_without_zookeeper
Stdout:8314
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster2' and host_name='node_1' on node
run container_id:roottestreloadclustersconfig-gw7-node-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n \n true\n \n node_1\n 9000\n \n \n \n \n\n' > /etc/clickhouse-server/config.d/remote_servers.xml"]
Command:[docker exec roottestreloadclustersconfig-gw7-node-1 bash -c echo ' true node_1 9000 node_2 9000 true node_1 9000 node_2 9000 true node_1 9000 ' > /etc/clickhouse-server/config.d/remote_servers.xml]
Executing query SYSTEM RELOAD CONFIG on node
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8314
Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/.env --project-name roottestreplicationwithoutzookeeper-gw9 --file /ClickHouse/tests/integration/test_replication_without_zookeeper/_instances-0-gw9/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes]
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/.env --project-name roottestreplicacanbecomeleader-gw6 --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_can_become_leader/_instances-0-gw6/node3/docker-compose.yml down --volumes]
Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Removing
Stderr: Container roottestreplicationwithoutzookeeper-gw9-node1-1 Removed
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Stopping
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Removing
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Removing
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Stopped
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Removing
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo3-1 Removed
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo2-1 Removed
Stderr: Container roottestreplicationwithoutzookeeper-gw9-zoo1-1 Removed
Stderr: Network roottestreplicationwithoutzookeeper-gw9_default Removing
Stderr: Network roottestreplicationwithoutzookeeper-gw9_default Removed
Cleanup called
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Docker networks for project roottestreplicationwithoutzookeeper-gw9 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreplicationwithoutzookeeper-gw9 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreplicationwithoutzookeeper-gw9 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreplicationwithoutzookeeper-gw9-.*-1$' --format '{{.ID}}:{{.Names}}']
Stdout:8314
Unstopped containers: {}
No running containers for project: roottestreplicationwithoutzookeeper-gw9
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Removing
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Removing
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Removing
Stderr: Container roottestreplicacanbecomeleader-gw6-node2-1 Removed
Stderr: Container roottestreplicacanbecomeleader-gw6-node1-1 Removed
Stderr: Container roottestreplicacanbecomeleader-gw6-node3-1 Removed
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Stopping
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Removing
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Removing
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Stopped
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Removing
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo2-1 Removed
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo3-1 Removed
Stderr: Container roottestreplicacanbecomeleader-gw6-zoo1-1 Removed
Stderr: Network roottestreplicacanbecomeleader-gw6_default Removing
Stderr: Network roottestreplicacanbecomeleader-gw6_default Removed
Cleanup called
Docker networks for project roottestreplicacanbecomeleader-gw6 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreplicacanbecomeleader-gw6 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
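The shutdowns interleaved above all follow one teardown recipe per cluster: compose stop with a 20 s grace period, a scan of the instances' stderr logs, compose down --volumes, then pruning images and volumes. Sketched as a helper (the function name and structure are mine; the commands are the ones logged):

import subprocess

def teardown(project: str, env_file: str, *compose_files: str) -> None:
    base = ["docker", "compose", "--env-file", env_file, "--project-name", project]
    for f in compose_files:
        base += ["--file", f]
    subprocess.run(base + ["stop", "--timeout", "20"], check=True)
    # ... the harness runs its stderr.log zgrep checks between stop and down ...
    subprocess.run(base + ["down", "--volumes"], check=True)
    subprocess.run(["docker", "image", "prune", "-f"], check=True)
    subprocess.run(["docker", "volume", "prune", "-f"], check=True)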
Docker volumes for project roottestreplicacanbecomeleader-gw6 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreplicacanbecomeleader-gw6-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestreplicacanbecomeleader-gw6
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:3
Command:[docker volume prune -f]
Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False
Stdout:Total reclaimed space: 0B
Volumes pruned: 3
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8314
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster2' and host_name='node_1' on node
Executing query SELECT * FROM system.clusters WHERE cluster='test_cluster3' on node
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8314
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8314
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:8314
Stdout:9129
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
http://localhost:None "GET /v1.46/exec/4d2ef4869cecfbb20581a725a6bbf15db373792a4aba89e49e4223d1d77e3a79/json HTTP/1.1" 200 584
Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log ] && zgrep -aH "view refreshes failed to stop" /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log ] && zgrep -aH "Closed connections. But" /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log ] && zgrep -aH "Will shutdown forcefully." /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log ] && zgrep -aH "##########" /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log ] && zgrep -aH "===test_refresh_vs_shutdown_smoke start===" /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Stdout:/ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log.8.gz:2025.04.02 03:20:11.930970 [ 795 ] {2a4cc11e-bec5-420c-b553-c630619a5fb4} executeQuery: (from 172.16.8.1:56124) (query 1, line 1) select '===test_refresh_vs_shutdown_smoke start===' (stage: Complete)
Stdout:/ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/clickhouse-server.log.8.gz:2025.04.02 03:20:11.931039 [ 795 ] {2a4cc11e-bec5-420c-b553-c630619a5fb4} CancellationChecker: Did not add the task because the timeout is 0. Query: select '===test_refresh_vs_shutdown_smoke start==='
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
No clickhouse process running. Start new one.
http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74
http://localhost:None "POST /v1.46/exec/c21655dde7ccd3403a8c9bbcea76c34b312f4f778236d9506d8dc1d9b14930fc/start HTTP/1.1" 200 0
http://localhost:None "GET /v1.46/exec/c21655dde7ccd3403a8c9bbcea76c34b312f4f778236d9506d8dc1d9b14930fc/json HTTP/1.1" 200 586
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"]
Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}']
Stdout:9166
Clickhouse process running.
run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 Executing query select 20 on node1 Executing query select 20 on node1 Stderr: node2 Skipped - Image is already being pulled by zoo1 Stderr: node3 Skipped - Image is already being pulled by zoo1 Stderr: node1 Skipped - Image is already being pulled by zoo1 Stderr: zoo2 Skipped - Image is already being pulled by zoo1 Stderr: zoo3 Skipped - Image is already being pulled by zoo1 Stderr: zoo1 Pulling Stderr: zoo1 Pulled Setup ZooKeeper Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper1/log', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper1/config', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper1/coordination', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper2/log', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper2/config', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper2/coordination', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper3/log', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper3/config', '/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/keeper3/coordination'] Command:[docker compose --project-name roottestreplicaisactive-gw5 --env-file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] Executing query select 20 on node1 Stderr:time="2025-04-02T03:21:54Z" level=trace msg="Docker Desktop integration not enabled" Stderr: Network roottestreplicaisactive-gw5_default Creating Stderr: Network roottestreplicaisactive-gw5_default Created Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Creating Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Creating Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Creating Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Created Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Created Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Created Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Starting Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Starting Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Starting Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Started Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Started Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Started Stderr:time="2025-04-02T03:21:54Z" level=debug msg="otel error" error="" Stderr:time="2025-04-02T03:21:54Z" level=debug msg="otel error" error="" Wait ZooKeeper to start get_instance_ip instance_name=zoo1 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-zoo1-1/json HTTP/1.1" 200 None get_kazoo_client: zoo1, ip:172.16.1.2, port:2181, use_ssl:False Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query drop database re sync on node1 Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection 
error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Executing query drop database re sync on node2 Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Connection dropped: socket connection error: Connection refused Connecting to 172.16.1.2(172.16.1.2):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost run container_id:roottestreloadclustersconfig-gw7-node-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n\n' > /etc/clickhouse-server/config.d/remote_servers.xml"] Command:[docker exec roottestreloadclustersconfig-gw7-node-1 bash -c echo ' true node_1 9000 node_2 9000 true node_1 9000 node_2 9000 ' > /etc/clickhouse-server/config.d/remote_servers.xml] Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo2 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-zoo2-1/json HTTP/1.1" 200 None get_kazoo_client: zoo2, ip:172.16.1.4, port:2181, use_ssl:False Connecting to 172.16.1.4(172.16.1.4):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Executing query SYSTEM RELOAD CONFIG on node Failed connecting to Zookeeper within the connection retry policy. Zookeeper session closed, state: CLOSED get_instance_ip instance_name=zoo3 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-zoo3-1/json HTTP/1.1" 200 None get_kazoo_client: zoo3, ip:172.16.1.3, port:2181, use_ssl:False Connecting to 172.16.1.3(172.16.1.3):2181, use_ssl: False Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) Zookeeper connection established, state: CONNECTED Sending request(xid=1): GetChildren(path='/', watcher=None) Received response(xid=1): ['keeper'] Sending request(xid=2): Close() Connection dropped: socket connection broken Transition to CONNECTING Zookeeper connection lost Failed connecting to Zookeeper within the connection retry policy. 
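The kazoo traffic above ("Connection refused" retries, then CONNECTED, GetChildren('/'), Close) is a readiness probe against each Keeper container. A sketch of that loop using the real kazoo client; `wait_keeper` and the retry parameters are assumptions, not the framework's actual code:

```python
# Retry until the Keeper node accepts a session and answers a root listing.
import time
from kazoo.client import KazooClient

def wait_keeper(ip: str, port: int = 2181, retries: int = 10, delay: float = 1.0):
    for _ in range(retries):
        zk = KazooClient(hosts=f"{ip}:{port}")
        try:
            zk.start(timeout=5)               # "Zookeeper connection established"
            return zk.get_children("/")       # e.g. ['keeper']
        except Exception:
            time.sleep(delay)                 # "Connection dropped: ... refused"
        finally:
            zk.stop()                         # "Sending request(xid=2): Close()"
            zk.close()
    raise TimeoutError(f"Keeper at {ip}:{port} did not come up")
```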
Zookeeper session closed, state: CLOSED All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env --project-name roottestreplicaisactive-gw5 --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env --project-name roottestreplicaisactive-gw5 --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/docker-compose.yml up -d --no-recreate] Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file 
/ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/.env --project-name roottestreplicatedzerocopyprojectionmutation-gw8 --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_minio.yml --file /ClickHouse/tests/integration/test_replicated_zero_copy_projection_mutation/_instances-0-gw8/node2/docker-compose.yml down --volumes] [gw7] PASSED test_reload_clusters_config/test.py::test_add_cluster test_reload_clusters_config/test.py::test_delete_cluster Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Running Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Running Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Running Stderr: Container roottestreplicaisactive-gw5-node1-1 Creating Stderr: Container roottestreplicaisactive-gw5-node2-1 Creating Stderr: Container roottestreplicaisactive-gw5-node3-1 Creating Stderr: Container roottestreplicaisactive-gw5-node1-1 Created Stderr: Container roottestreplicaisactive-gw5-node2-1 Created Stderr: Container roottestreplicaisactive-gw5-node3-1 Created Stderr: Container roottestreplicaisactive-gw5-node1-1 Starting Stderr: Container roottestreplicaisactive-gw5-node3-1 Starting Stderr: Container roottestreplicaisactive-gw5-node2-1 Starting Stderr: Container roottestreplicaisactive-gw5-node3-1 Started Stderr: Container roottestreplicaisactive-gw5-node1-1 Started Stderr: Container roottestreplicaisactive-gw5-node2-1 Started ClickHouse instance created get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node1-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node1 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node1-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node1, ip: 172.16.1.6... 
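After `docker compose ... up -d --no-recreate`, the harness polls the Docker API until each ClickHouse instance answers ("Waiting for ClickHouse start in node1, ip: 172.16.1.6..."). A minimal host-side equivalent, assuming the native TCP port 9000 is reachable; purely illustrative, not the framework's actual wait loop:

```python
# Poll the server's TCP port until it accepts connections or the deadline passes.
import socket, time

def wait_clickhouse(ip: str, port: int = 9000, timeout: float = 60.0) -> None:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((ip, port), timeout=2):
                return
        except OSError:
            time.sleep(0.5)
    raise TimeoutError(f"ClickHouse at {ip}:{port} did not start")

# wait_clickhouse("172.16.1.6")  # node1 in the run above
```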
http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node1-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node2-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-node1-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-resolver-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo3-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo1-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-zoo2-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-minio1-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Stopping Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Stopped Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Removing Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy1-1 Removed Stderr: Container roottestreplicatedzerocopyprojectionmutation-gw8-proxy2-1 Removed Stderr: Volume roottestreplicatedzerocopyprojectionmutation-gw8_data1-1 Removing Stderr: Network roottestreplicatedzerocopyprojectionmutation-gw8_default Removing Stderr: Volume roottestreplicatedzerocopyprojectionmutation-gw8_data1-1 Removed Stderr: Network 
roottestreplicatedzerocopyprojectionmutation-gw8_default Removed Cleanup called Docker networks for project roottestreplicatedzerocopyprojectionmutation-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicatedzerocopyprojectionmutation-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicatedzerocopyprojectionmutation-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreplicatedzerocopyprojectionmutation-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreplicatedzerocopyprojectionmutation-gw8 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable Running tests in /ClickHouse/tests/integration/test_runtime_configurable_cache_size/test.py Cluster start called. is_up=False Docker networks for project roottestruntimeconfigurablecachesize-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestruntimeconfigurablecachesize-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestruntimeconfigurablecachesize-gw8 are DRIVER VOLUME NAME Cleanup called http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Docker networks for project roottestruntimeconfigurablecachesize-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestruntimeconfigurablecachesize-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestruntimeconfigurablecachesize-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestruntimeconfigurablecachesize-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestruntimeconfigurablecachesize-gw8 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
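The "Cleanup called" sequence above (list leftover containers by project name filter, then prune images and count volumes) repeats after every suite. A sketch of that per-project cleanup with hypothetical naming, mirroring the logged commands:

```python
# Per-project teardown sweep, as seen in the "Cleanup called" blocks above.
import subprocess

def project_cleanup(project: str) -> None:
    left = subprocess.run(
        ["docker", "container", "list", "--all",
         "--filter", f"name=^/{project}-.*-1$",
         "--format", "{{.ID}}:{{.Names}}"],
        capture_output=True, text=True,
    ).stdout.strip()
    print("Unstopped containers:", left or "{}")
    subprocess.run(["docker", "image", "prune", "-f"])   # "Images pruned"
    volumes = subprocess.run(["bash", "-c", "docker volume ls | wc -l"],
                             capture_output=True, text=True).stdout.strip()
    print("Volumes:", volumes)
```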
Command:[docker volume ls | wc -l] [gw4] PASSED test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Executing query create database re engine = Replicated('/test/re', 'shard1', '{replica}'); on node1 Stdout:1 Volumes pruned: 1 Setup directory for instance: node Create directory for configuration generated in this helper Create directory for common tests configuration Copy common configuration from helpers Generate and write macros file Copy custom test config files ['/ClickHouse/tests/integration/test_runtime_configurable_cache_size/configs/default.xml'] to /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/configs/config.d Setup database dir /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/database Setup logs dir /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/logs Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon -- ; coproc tail -f /dev/null; wait $$!" Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/.env Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] No config file found http://localhost:None "GET /version HTTP/1.1" 200 826 Command:[docker compose --env-file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/.env --project-name roottestruntimeconfigurablecachesize-gw8 --file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/docker-compose.yml pull] http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Executing query create database re engine = Replicated('/test/re', 'shard1', '{replica}'); on node2 http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Executing query create materialized view re.a refresh every 1 second (x Int64) engine Memory as select 1 as x on node1 http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None Executing query create materialized view re.a refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select number*10 as x from numbers(2) on node1 http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json 
HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/58f44230be1dee46dbf387bcf861b9e6f4c6c64c5d22b7225afc235f3d437f82/json HTTP/1.1" 200 None ClickHouse node1 started get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node2-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node2 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node2-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node2, ip: 172.16.1.7... http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node2-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/c22d30254c9033f9d5b6c1b46740c05e53581a1de9d3e9ecf49863f3f7f9de3d/json HTTP/1.1" 200 None ClickHouse node2 started get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node3-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node3 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node3-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node3, ip: 172.16.1.5... http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node3-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/998348e065f660100fff4a3adaa1a0a50a31329f04ff9f2518e07b3bc9e26190/json HTTP/1.1" 200 None ClickHouse node3 started Executing query CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', 'node1') PARTITION BY date ORDER BY id on node1 Executing query system sync database replica re on node1 Executing query CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', 'node2') PARTITION BY date ORDER BY id on node2 Executing query system wait view re.a on node1 Executing query CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test_table', 'node3') PARTITION BY date ORDER BY id on node3 Executing query select * from re.a order by all on node1 Executing query select replica_is_active from system.replicas where table = 'test_table' on node1 Executing query select database, view, last_success_time != 0, last_refresh_time != 0, last_refresh_replica in ('1','2'), exception from system.view_refreshes on node1 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node3-1/json HTTP/1.1" 200 None Executing query system wait view re.a on node2 Executing query select * from re.a order by all on node2 Executing query select database, view, last_success_time != 0, last_refresh_time != 0, last_refresh_replica in ('1','2'), exception from system.view_refreshes on node2 Executing query create materialized view re.append refresh every 1 year append (x Int64) engine ReplicatedMergeTree order by x as select rand() as x on node2 Executing query system test view re.append set fake time '2040-01-01 00:00:01' on node1 Executing query system test view re.append set fake time '2040-01-01 00:00:01' on node2 Executing query system wait view re.append; system refresh view re.append; system wait view re.append; on node1 Executing query system wait view re.append; system refresh view re.append; system wait view 
re.append; on node2 Executing query select count() from re.append on node2 Executing query system test view re.append set fake time '2041-01-01 00:00:01' on node1 Executing query system test view re.append set fake time '2041-01-01 00:00:01' on node2 Executing query select status, last_success_time from system.view_refreshes where view = 'append' on node1 Executing query system wait view re.append on node1 Executing query select status, last_success_time from system.view_refreshes where view = 'append' on node2 Executing query system wait view re.append on node2 Executing query system sync replica re.append on node2 Executing query select count() from re.append on node2 Executing query create materialized view re.append_uncoordinated refresh every 1 year settings all_replicas = 1 append (x Int64) engine ReplicatedMergeTree order by x as select rand() as x on node2 Executing query system test view re.append_uncoordinated set fake time '2040-01-01 00:00:01' on node1 Executing query system test view re.append_uncoordinated set fake time '2040-01-01 00:00:01' on node2 http://localhost:None "POST /v1.46/containers/998348e065f660100fff4a3adaa1a0a50a31329f04ff9f2518e07b3bc9e26190/stop HTTP/1.1" 204 0 Executing query select replica_is_active from system.replicas where table = 'test_table' on node1 Executing query system wait view re.append_uncoordinated; system refresh view re.append_uncoordinated; system wait view re.append_uncoordinated; on node1 http://localhost:None "GET /v1.46/containers/roottestreplicaisactive-gw5-node2-1/json HTTP/1.1" 200 None Executing query system wait view re.append_uncoordinated; system refresh view re.append_uncoordinated; system wait view re.append_uncoordinated; on node2 Executing query select count() from re.append_uncoordinated on node2 Executing query system test view re.append_uncoordinated set fake time '2041-01-01 00:00:01' on node1 Executing query system test view re.append_uncoordinated set fake time '2041-01-01 00:00:01' on node2 Executing query select status, last_success_time from system.view_refreshes where view = 'append_uncoordinated' on node1 Executing query system wait view re.append_uncoordinated on node1 Executing query select status, last_success_time from system.view_refreshes where view = 'append_uncoordinated' on node2 Executing query system wait view re.append_uncoordinated on node2 Executing query system sync replica re.append_uncoordinated on node1 Executing query select count() from re.append_uncoordinated on node1 Executing query create materialized view re.unreplicated_uncoordinated refresh every 1 second settings all_replicas = 1 append (x String) engine Memory as select 1 as x on node1 Executing query system sync database replica re on node2 Executing query system wait view re.unreplicated_uncoordinated on node1 Executing query select distinct x from re.unreplicated_uncoordinated on node1 http://localhost:None "POST /v1.46/containers/c22d30254c9033f9d5b6c1b46740c05e53581a1de9d3e9ecf49863f3f7f9de3d/stop HTTP/1.1" 204 0 Executing query select replica_is_active from system.replicas where table = 'test_table' on node1 Executing query system wait view re.unreplicated_uncoordinated on node2 Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env --project-name roottestreplicaisactive-gw5 --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/docker-compose.yml --file 
/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/docker-compose.yml stop --timeout 20] [gw5] PASSED test_replica_is_active/test.py::test_replica_is_active Executing query select distinct x from re.unreplicated_uncoordinated on node2 Executing query create materialized view re.c refresh every 1 year (x Int64) engine ReplicatedMergeTree order by x empty as select rand() as x on node2 Executing query system sync database replica re on node1 Stderr: node Pulling Stderr: node Pulled ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/.env --project-name roottestruntimeconfigurablecachesize-gw8 --file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/docker-compose.yml up -d --no-recreate') Command:[docker compose --env-file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/.env --project-name roottestruntimeconfigurablecachesize-gw8 --file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/docker-compose.yml up -d --no-recreate] Executing query rename table re.c to re.d on node1 Executing query alter table re.d modify query select number + sleepEachRow(1) as x from numbers(5) settings max_block_size = 1 on node1 Stderr: Network roottestruntimeconfigurablecachesize-gw8_default Creating Stderr: Network roottestruntimeconfigurablecachesize-gw8_default Created Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Creating Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Created Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Starting Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Started ClickHouse instance created get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestruntimeconfigurablecachesize-gw8-node-1/json HTTP/1.1" 200 None get_instance_ip instance_name=node http://localhost:None "GET /v1.46/containers/roottestruntimeconfigurablecachesize-gw8-node-1/json HTTP/1.1" 200 None Waiting for ClickHouse start in node, ip: 172.16.2.2... 
http://localhost:None "GET /v1.46/containers/roottestruntimeconfigurablecachesize-gw8-node-1/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None Executing query system refresh view re.d on node1 http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None Executing query select status from system.view_refreshes where view = 'd' on node2 http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None Executing query rename table re.d to re.e on node2 http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None Executing query system wait view re.e on node1 http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None http://localhost:None "GET /v1.46/containers/cb18aab93fd881124fa10a4fe9c6edefa330c2a86b1e76b774312b37c1f1ea57/json HTTP/1.1" 200 None ClickHouse node started Executing query SYSTEM DROP QUERY CACHE on node Executing query SELECT 1 SETTINGS use_query_cache = 1, query_cache_ttl = 1 on node Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Stderr: Container roottestreplicaisactive-gw5-node1-1 Stopping Stderr: Container roottestreplicaisactive-gw5-node2-1 Stopping Stderr: Container roottestreplicaisactive-gw5-node3-1 Stopping Stderr: Container roottestreplicaisactive-gw5-node3-1 Stopped Stderr: Container roottestreplicaisactive-gw5-node2-1 Stopped Stderr: Container roottestreplicaisactive-gw5-node1-1 Stopped Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Stopping Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Stopping Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Stopping Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Stopped Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Stopped Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f 
/ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/.env --project-name roottestreplicaisactive-gw5 --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node2/docker-compose.yml --file /ClickHouse/tests/integration/test_replica_is_active/_instances-0-gw5/node3/docker-compose.yml down --volumes] Connection dropped: socket connection error: None Stderr: Container roottestreplicaisactive-gw5-node1-1 Stopping Stderr: Container roottestreplicaisactive-gw5-node2-1 Stopping Stderr: Container roottestreplicaisactive-gw5-node3-1 Stopping Stderr: Container roottestreplicaisactive-gw5-node1-1 Stopped Stderr: Container roottestreplicaisactive-gw5-node1-1 Removing Stderr: Container roottestreplicaisactive-gw5-node2-1 Stopped Stderr: Container roottestreplicaisactive-gw5-node2-1 Removing Stderr: Container roottestreplicaisactive-gw5-node3-1 Stopped Stderr: Container roottestreplicaisactive-gw5-node3-1 Removing Stderr: Container roottestreplicaisactive-gw5-node2-1 Removed Stderr: Container roottestreplicaisactive-gw5-node3-1 Removed Stderr: Container roottestreplicaisactive-gw5-node1-1 Removed Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Stopping Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Stopping Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Stopping Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Stopped Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Removing Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Stopped Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Removing Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Stopped Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Removing Stderr: Container roottestreplicaisactive-gw5-zoo1-1 Removed Stderr: Container roottestreplicaisactive-gw5-zoo2-1 Removed Stderr: Container roottestreplicaisactive-gw5-zoo3-1 Removed Stderr: Network roottestreplicaisactive-gw5_default Removing Stderr: Network roottestreplicaisactive-gw5_default Removed Cleanup called Docker networks for project roottestreplicaisactive-gw5 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestreplicaisactive-gw5 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestreplicaisactive-gw5 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestreplicaisactive-gw5-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestreplicaisactive-gw5 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
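The teardown above scans rotated stderr/server logs with `zgrep -aH "=================="` (the sanitizer-report separator). A minimal wrapper around that exact bash one-liner; `scan_logs` is a hypothetical name:

```python
# Scan (possibly gzipped, rotated) logs for a pattern, as the teardown does.
import subprocess

def scan_logs(log_path: str, pattern: str) -> str:
    # The "( [ -z "" ] && cat || grep -v "$" )" tail is copied from the logged
    # command: with an empty exclusion pattern it degenerates to plain cat.
    cmd = (
        f'[ -f {log_path} ] && zgrep -aH "{pattern}" {log_path}* '
        '| ( [ -z "" ] && cat || grep -v "$" ) || true'
    )
    return subprocess.run(["bash", "-c", cmd],
                          capture_output=True, text=True).stdout
```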
Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Connection dropped: socket connection error: None Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False Executing query SELECT count(*) FROM system.query_cache on node run container_id:roottestruntimeconfigurablecachesize-gw8-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/config.d/default.xml) && echo PGNsaWNraG91c2U+CgogICAgPHF1ZXJ5X2NhY2hlPgogICAgICAgIDxtYXhfZW50cmllcz4wPC9tYXhfZW50cmllcz4KICAgIDwvcXVlcnlfY2FjaGU+Cgo8L2NsaWNraG91c2U+Cg== | base64 --decode > /etc/clickhouse-server/config.d/default.xml'] Command:[docker exec roottestruntimeconfigurablecachesize-gw8-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/config.d/default.xml) && echo PGNsaWNraG91c2U+CgogICAgPHF1ZXJ5X2NhY2hlPgogICAgICAgIDxtYXhfZW50cmllcz4wPC9tYXhfZW50cmllcz4KICAgIDwvcXVlcnlfY2FjaGU+Cgo8L2NsaWNraG91c2U+Cg== | base64 --decode > /etc/clickhouse-server/config.d/default.xml] Executing query SYSTEM RELOAD CONFIG on node Executing query SELECT count(*) FROM system.query_cache on node Executing query SELECT 2 SETTINGS use_query_cache = 1, query_cache_ttl = 1 on node Executing query SELECT count(*) FROM system.query_cache on node Executing query SELECT 3 SETTINGS use_query_cache = 1, query_cache_ttl = 1 on node Executing query SELECT count(*) FROM system.query_cache on node run container_id:roottestruntimeconfigurablecachesize-gw8-node-1 detach:False nothrow:False cmd: ['bash', '-c', 'mkdir -p $(dirname /etc/clickhouse-server/config.d/default.xml) && echo PGNsaWNraG91c2U+CgogICAgPHF1ZXJ5X2NhY2hlPgogICAgICAgIDxtYXhfZW50cmllcz4yPC9tYXhfZW50cmllcz4KICAgIDwvcXVlcnlfY2FjaGU+CgogICAgPG1hcmtfY2FjaGVfc2l6ZT40OTY8L21hcmtfY2FjaGVfc2l6ZT4KCjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/config.d/default.xml'] Command:[docker exec roottestruntimeconfigurablecachesize-gw8-node-1 bash -c mkdir -p $(dirname /etc/clickhouse-server/config.d/default.xml) && echo PGNsaWNraG91c2U+CgogICAgPHF1ZXJ5X2NhY2hlPgogICAgICAgIDxtYXhfZW50cmllcz4yPC9tYXhfZW50cmllcz4KICAgIDwvcXVlcnlfY2FjaGU+CgogICAgPG1hcmtfY2FjaGVfc2l6ZT40OTY8L21hcmtfY2FjaGVfc2l6ZT4KCjwvY2xpY2tob3VzZT4K | base64 --decode > /etc/clickhouse-server/config.d/default.xml] Executing query SYSTEM RELOAD CONFIG on node Executing query select * from re.e order by x on node1 Executing query SELECT 4 SETTINGS use_query_cache = 1, query_cache_ttl = 1 on node Executing query create materialized view re.f refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select sleepEachRow(1) as x from numbers(1000000) settings max_block_size = 1 on node1 Executing query SELECT count(*) FROM system.query_cache on node Command:[docker compose --env-file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/.env --project-name roottestruntimeconfigurablecachesize-gw8 --file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/docker-compose.yml stop --timeout 20] [gw8] PASSED test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable Executing query select status in ('Running', 'RunningOnAnotherReplica') from system.view_refreshes where view = 'f' on node2 Connection dropped: socket connection error: None Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Stopping Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Stopped Command:[bash -c [ -f 
/ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Executing query select table, uuid from system.tables where database = 're' on node1 Command:[docker compose --env-file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/.env --project-name roottestruntimeconfigurablecachesize-gw8 --file /ClickHouse/tests/integration/test_runtime_configurable_cache_size/_instances-0-gw8/node/docker-compose.yml down --volumes] Connection dropped: socket connection error: No route to host Executing query select count() from system.zookeeper where path = '/clickhouse/tables/a913a87d-3ab6-4eeb-a8ef-50faf1ccdc2a' and name = 'shard1' on node1 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/cbf4de9c-b4ee-4e1a-80fd-3e0a1f1c3a1b' and name = 'shard1' on node2 Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Stopping Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Stopped Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Removing Stderr: Container roottestruntimeconfigurablecachesize-gw8-node-1 Removed Stderr: Network roottestruntimeconfigurablecachesize-gw8_default Removing Stderr: Network roottestruntimeconfigurablecachesize-gw8_default Removed Cleanup called Docker networks for project roottestruntimeconfigurablecachesize-gw8 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestruntimeconfigurablecachesize-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestruntimeconfigurablecachesize-gw8 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestruntimeconfigurablecachesize-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestruntimeconfigurablecachesize-gw8 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
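The base64 blobs written above are just config-override XML: the first decodes to a `<query_cache>` block with `<max_entries>0</max_entries>`, the second to `<max_entries>2</max_entries>` plus `<mark_cache_size>496</mark_cache_size>`. The injection step is: encode the XML, write it into config.d inside the container, then `SYSTEM RELOAD CONFIG`. A sketch with hypothetical helper names:

```python
# Inject a config.d override into a running container, as logged above.
import base64, subprocess

def write_config(container: str, xml: str,
                 dest: str = "/etc/clickhouse-server/config.d/default.xml") -> None:
    payload = base64.b64encode(xml.encode()).decode()
    cmd = f"mkdir -p $(dirname {dest}) && echo {payload} | base64 --decode > {dest}"
    subprocess.run(["docker", "exec", container, "bash", "-c", cmd], check=True)

XML = """<clickhouse>
    <query_cache>
        <max_entries>2</max_entries>
    </query_cache>
</clickhouse>
"""
# write_config("roottestruntimeconfigurablecachesize-gw8-node-1", XML)
# ...followed by: node.query("SYSTEM RELOAD CONFIG") to apply it at runtime.
```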
Command:[docker volume ls | wc -l] Stdout:1 Volumes pruned: 1 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/4cb5a7e8-e858-49aa-bba1-259d1ab02e6f' and name = 'shard1' on node2 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/e414c246-408b-4bc4-ba2d-4950625e292c' and name = 'shard1' on node1 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/9564eede-8526-49fe-8111-ea73311f2771' and name = 'shard1' on node1 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/a430c5f2-7b2f-4e97-8e8c-4f2b3909f523' and name = 'shard1' on node2 Executing query drop table re.a sync on node1 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/a913a87d-3ab6-4eeb-a8ef-50faf1ccdc2a' and name = 'shard1' on node2 Executing query drop table re.append sync on node2 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/cbf4de9c-b4ee-4e1a-80fd-3e0a1f1c3a1b' and name = 'shard1' on node2 Executing query drop table re.append_uncoordinated sync on node2 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/4cb5a7e8-e858-49aa-bba1-259d1ab02e6f' and name = 'shard1' on node1 Executing query drop table re.e on node2 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/e414c246-408b-4bc4-ba2d-4950625e292c' and name = 'shard1' on node2 Executing query drop table re.f on node2 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/9564eede-8526-49fe-8111-ea73311f2771' and name = 'shard1' on node1 Executing query drop table re.unreplicated_uncoordinated sync on node1 Executing query select count() from system.zookeeper where path = '/clickhouse/tables/a430c5f2-7b2f-4e97-8e8c-4f2b3909f523' and name = 'shard1' on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x empty as select 1 as x on node1 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x empty as select 1 as x on node2 Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x empty as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x empty as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x empty as select 1 as x on node2 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized 
view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node1 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node1 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x empty as select 1 as x on node2 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node2 Executing query drop table re.g on node2 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node1 Executing query drop table re.g on node1 Executing query create materialized view re.g refresh every 1 second (x Int64) engine ReplicatedMergeTree order by x as select 1 as x on node1 Executing query drop table re.g on node2 Executing query show tables from re on node1 Connection dropped: socket connection error: None Executing query show tables from re on node2 Executing query drop database re sync on node1 Executing query drop database re sync on node2 Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node [gw4] PASSED test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db test_refreshable_mv/test.py::test_refreshable_mv_in_system_db Executing query create materialized view system.a refresh every 1 second (x Int64) engine Memory as select number+1 as x from numbers(2);system refresh view system.a; on node1 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c ps -C clickhouse] Stdout: PID TTY TIME CMD Stdout: 9166 ? 
00:00:21 clickhouse run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] Command:[docker exec -u root roottestrefreshablemv-gw4-node1-1 bash -c pkill clickhouse] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 Connection dropped: socket connection error: None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 Connection dropped: socket connection error: None run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:9166 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print 
$1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] No clickhouse process running. Start new one. http://localhost:None "POST /v1.46/containers/roottestrefreshablemv-gw4-node1-1/exec HTTP/1.1" 201 74 http://localhost:None "POST /v1.46/exec/1fe6e760b7e183ea3e03d1376799ad660bfb506c233b8e26a279d98c1847666c/start HTTP/1.1" 200 0 http://localhost:None "GET /v1.46/exec/1fe6e760b7e183ea3e03d1376799ad660bfb506c233b8e26a279d98c1847666c/json HTTP/1.1" 200 587 run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:10030 Clickhouse process running. run container_id:roottestrefreshablemv-gw4-node1-1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] Command:[docker exec roottestrefreshablemv-gw4-node1-1 bash -c ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'] Stdout:10030 Executing query select 20 on node1 Executing query select 20 on node1 Executing query select 20 on node1 Executing query system refresh view system.a on node1 Executing query select count(), sum(x) from system.a on node1 Executing query drop table system.a on node1 Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node Command:[docker compose --env-file /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/.env --project-name roottestrefreshablemv-gw4 --file /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node2/docker-compose.yml stop --timeout 20] [gw4] PASSED test_refreshable_mv/test.py::test_refreshable_mv_in_system_db Stderr: Container roottestrefreshablemv-gw4-node1-1 Stopping Stderr: Container roottestrefreshablemv-gw4-node2-1 Stopping Stderr: Container roottestrefreshablemv-gw4-node1-1 Stopped Stderr: Container roottestrefreshablemv-gw4-node2-1 Stopped Stderr: Container roottestrefreshablemv-gw4-zoo2-1 Stopping Stderr: Container roottestrefreshablemv-gw4-zoo3-1 Stopping Stderr: Container roottestrefreshablemv-gw4-zoo1-1 Stopping Stderr: Container roottestrefreshablemv-gw4-zoo1-1 Stopped Stderr: Container roottestrefreshablemv-gw4-zoo2-1 Stopped Stderr: Container roottestrefreshablemv-gw4-zoo3-1 Stopped Command:[bash -c [ -f /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[bash -c [ -f 
/ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] Command:[docker compose --env-file /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/.env --project-name roottestrefreshablemv-gw4 --file /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_refreshable_mv/_instances-0-gw4/node2/docker-compose.yml down --volumes] Stderr: Container roottestrefreshablemv-gw4-node2-1 Stopping Stderr: Container roottestrefreshablemv-gw4-node1-1 Stopping Stderr: Container roottestrefreshablemv-gw4-node2-1 Stopped Stderr: Container roottestrefreshablemv-gw4-node2-1 Removing Stderr: Container roottestrefreshablemv-gw4-node1-1 Stopped Stderr: Container roottestrefreshablemv-gw4-node1-1 Removing Stderr: Container roottestrefreshablemv-gw4-node2-1 Removed Stderr: Container roottestrefreshablemv-gw4-node1-1 Removed Stderr: Container roottestrefreshablemv-gw4-zoo2-1 Stopping Stderr: Container roottestrefreshablemv-gw4-zoo3-1 Stopping Stderr: Container roottestrefreshablemv-gw4-zoo1-1 Stopping Stderr: Container roottestrefreshablemv-gw4-zoo2-1 Stopped Stderr: Container roottestrefreshablemv-gw4-zoo2-1 Removing Stderr: Container roottestrefreshablemv-gw4-zoo3-1 Stopped Stderr: Container roottestrefreshablemv-gw4-zoo3-1 Removing Stderr: Container roottestrefreshablemv-gw4-zoo1-1 Stopped Stderr: Container roottestrefreshablemv-gw4-zoo1-1 Removing Stderr: Container roottestrefreshablemv-gw4-zoo2-1 Removed Stderr: Container roottestrefreshablemv-gw4-zoo3-1 Removed Stderr: Container roottestrefreshablemv-gw4-zoo1-1 Removed Stderr: Network roottestrefreshablemv-gw4_default Removing Stderr: Network roottestrefreshablemv-gw4_default Removed Cleanup called Docker networks for project roottestrefreshablemv-gw4 are NETWORK ID NAME DRIVER SCOPE Docker containers for project roottestrefreshablemv-gw4 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Docker volumes for project roottestrefreshablemv-gw4 are DRIVER VOLUME NAME Command:[docker container list --all --filter name='^/roottestrefreshablemv-gw4-.*-1$' --format '{{.ID}}:{{.Names}}'] Unstopped containers: {} No running containers for project: roottestrefreshablemv-gw4 Trying to prune unused networks... Trying to prune unused images... Command:[docker image prune -f] Stdout:Total reclaimed space: 0B Images pruned Trying to prune unused volumes... 
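The long run of `create materialized view re.g ... / drop table re.g` above alternates replicas and the `empty` clause, exercising concurrent DDL on the same view through the Replicated database. A plausible shape of that churn loop, assuming `nodes` are instance fixtures and that the Replicated database propagates each create before the peer's drop; error handling is omitted:

```python
# Plausible shape of the create/drop churn over re.g seen above (a sketch,
# not the test's exact source; the real test likely synchronizes between steps).
import random

def churn_view(nodes, iterations: int = 20) -> None:
    for _ in range(iterations):
        empty = random.choice(["empty ", ""])
        random.choice(nodes).query(
            "create materialized view re.g refresh every 1 second (x Int64) "
            f"engine ReplicatedMergeTree order by x {empty}as select 1 as x"
        )
        random.choice(nodes).query("drop table re.g")
```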
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster2' and host_name='node_1' on node
run container_id:roottestreloadclustersconfig-gw7-node-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n\n' > /etc/clickhouse-server/config.d/remote_servers.xml"]
Command:[docker exec roottestreloadclustersconfig-gw7-node-1 bash -c echo ' true node_1 9000 node_2 9000 ' > /etc/clickhouse-server/config.d/remote_servers.xml]
Executing query SYSTEM RELOAD CONFIG on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
Executing query SELECT * FROM system.clusters WHERE cluster='test_cluster2' on node
Connection dropped: socket connection error: None
run container_id:roottestreloadclustersconfig-gw7-node-1 detach:False nothrow:False cmd: ['bash', '-c', "echo '\n\n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n \n true\n \n node_1\n 9000\n \n \n node_2\n 9000\n \n \n \n \n\n' > /etc/clickhouse-server/config.d/remote_servers.xml"]
Command:[docker exec roottestreloadclustersconfig-gw7-node-1 bash -c echo ' true node_1 9000 node_2 9000 true node_1 9000 node_2 9000 ' > /etc/clickhouse-server/config.d/remote_servers.xml]
Executing query SYSTEM RELOAD CONFIG on node
[gw7] PASSED test_reload_clusters_config/test.py::test_delete_cluster
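The two echo commands above rewrite /etc/clickhouse-server/config.d/remote_servers.xml, but the XML tags were stripped when this log was rendered; only the values (true, node_1, 9000, node_2, 9000) and the target path survive. Judging by those values and the usual layout of integration-test cluster configs, the file written by the second command plausibly looks like the following; every tag name here is a reconstruction, not attested by the log:

    <clickhouse>
        <remote_servers>
            <test_cluster>
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica><host>node_1</host><port>9000</port></replica>
                    <replica><host>node_2</host><port>9000</port></replica>
                </shard>
            </test_cluster>
            <test_cluster2>
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica><host>node_1</host><port>9000</port></replica>
                    <replica><host>node_2</host><port>9000</port></replica>
                </shard>
            </test_cluster2>
        </remote_servers>
    </clickhouse>

The first echo presumably writes the same file with only test_cluster, which is why, after SYSTEM RELOAD CONFIG, the test queries SELECT * FROM system.clusters WHERE cluster='test_cluster2' and expects the deleted cluster to be gone.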
test_reload_clusters_config/test.py::test_simple_reload
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Connection dropped: socket connection error: None
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
Executing query SYSTEM RELOAD CONFIG on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
[gw7] PASSED test_reload_clusters_config/test.py::test_simple_reload
test_reload_clusters_config/test.py::test_update_one_cluster
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Connection dropped: socket connection error: None
Failed connecting to Zookeeper within the connection retry policy.
Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Connection dropped: socket connection error: None
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node
Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node
Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20]
[gw7] FAILED test_reload_clusters_config/test.py::test_update_one_cluster
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopped
Command:[bash -c [ -f /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true]
Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes]
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Removing
Stderr: Container roottestreloadclustersconfig-gw7-node-1 Removed
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopping
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Removing
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Removing
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopped
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Removing
Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Removed
Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Removed
Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Removed
Stderr: Network roottestreloadclustersconfig-gw7_default Removing
Stderr: Network roottestreloadclustersconfig-gw7_default Removed
Cleanup called
Docker networks for project roottestreloadclustersconfig-gw7 are NETWORK ID NAME DRIVER SCOPE
Docker containers for project roottestreloadclustersconfig-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker volumes for project roottestreloadclustersconfig-gw7 are DRIVER VOLUME NAME
Command:[docker container list --all --filter name='^/roottestreloadclustersconfig-gw7-.*-1$' --format '{{.ID}}:{{.Names}}']
Unstopped containers: {}
No running containers for project: roottestreloadclustersconfig-gw7
Trying to prune unused networks...
Trying to prune unused images...
Command:[docker image prune -f]
Stdout:Total reclaimed space: 0B
Images pruned
Trying to prune unused volumes...
Command:[docker volume ls | wc -l]
Stdout:1
Volumes pruned: 1
=================================== FAILURES ===================================
___________________________ test_update_one_cluster ____________________________
[gw7] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_update_one_cluster(started_cluster):
        send_repeated_query("distributed")
        send_repeated_query("distributed2")
>       assert get_errors_count("test_cluster") > 0
E       AssertionError: assert 0 > 0
E        +  where 0 = get_errors_count('test_cluster')

test_reload_clusters_config/test.py:202: AssertionError
------------------------------ Captured log call -------------------------------
2025-04-02 03:25:19 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:25:33 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:25:45 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:25:58 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:26:10 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:26:24 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:26:37 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:26:50 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:27:03 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:27:15 [ 760 ] DEBUG : Executing query SELECT count() FROM distributed2 SETTINGS receive_timeout=1, handshake_timeout_ms=1 on node (cluster.py:3647, query_and_get_error)
2025-04-02 03:27:28 [ 760 ] DEBUG : Executing query SELECT errors_count FROM system.clusters WHERE cluster='test_cluster' and host_name='node_1' on node (cluster.py:3564, query)
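The assertion above leans on two helpers whose behavior is fully visible in the captured queries: send_repeated_query fires the same SELECT at a Distributed table several times with tiny timeouts so that failed connections increment errors_count, and get_errors_count reads the counter back from system.clusters. A sketch reconstructed from the log (signatures, the repeat count, and the node fixture name are assumptions; the authoritative code is test_reload_clusters_config/test.py):

    # Reconstructed from the captured queries; details are assumptions.
    def send_repeated_query(table, count=5):
        # Each query is expected to fail fast and bump errors_count for the cluster.
        for _ in range(count):
            node.query_and_get_error(
                f"SELECT count() FROM {table} "
                "SETTINGS receive_timeout=1, handshake_timeout_ms=1"
            )

    def get_errors_count(cluster, host_name="node_1"):
        return int(
            node.query(
                "SELECT errors_count FROM system.clusters "
                f"WHERE cluster='{cluster}' and host_name='{host_name}'"
            )
        )

The failure (assert 0 > 0) suggests the ten short-timeout queries did not produce the connection errors the test relies on, which reads as flakiness in error injection rather than a config-reload regression.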
---------------------------- Captured log teardown -----------------------------
2025-04-02 03:27:28 [ 760 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml stop --timeout 20] (cluster.py:120, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:120, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/.env --project-name roottestreloadclustersconfig-gw7 --file /ClickHouse/tests/integration/test_reload_clusters_config/_instances-0-gw7/node/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml down --volumes] (cluster.py:120, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-node-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-node-1 Removing (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-node-1 Removed (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopping (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Removing (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Removing (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Stopped (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Removing (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo2-1 Removed (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo1-1 Removed (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Container roottestreloadclustersconfig-gw7-zoo3-1 Removed (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Network roottestreloadclustersconfig-gw7_default Removing (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stderr: Network roottestreloadclustersconfig-gw7_default Removed (cluster.py:146, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Cleanup called (cluster.py:876, cleanup)
2025-04-02 03:27:38 [ 760 ] DEBUG : Docker networks for project roottestreloadclustersconfig-gw7 are NETWORK ID NAME DRIVER SCOPE (cluster.py:855, print_all_docker_pieces)
2025-04-02 03:27:38 [ 760 ] DEBUG : Docker containers for project roottestreloadclustersconfig-gw7 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:863, print_all_docker_pieces)
2025-04-02 03:27:38 [ 760 ] DEBUG : Docker volumes for project roottestreloadclustersconfig-gw7 are DRIVER VOLUME NAME (cluster.py:871, print_all_docker_pieces)
2025-04-02 03:27:38 [ 760 ] DEBUG : Command:[docker container list --all --filter name='^/roottestreloadclustersconfig-gw7-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:120, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Unstopped containers: {} (cluster.py:890, cleanup)
2025-04-02 03:27:38 [ 760 ] DEBUG : No running containers for project: roottestreloadclustersconfig-gw7 (cluster.py:904, cleanup)
2025-04-02 03:27:38 [ 760 ] DEBUG : Trying to prune unused networks... (cluster.py:910, cleanup)
2025-04-02 03:27:38 [ 760 ] DEBUG : Trying to prune unused images... (cluster.py:926, cleanup)
2025-04-02 03:27:38 [ 760 ] DEBUG : Command:[docker image prune -f] (cluster.py:120, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:144, run_and_check)
2025-04-02 03:27:38 [ 760 ] DEBUG : Images pruned (cluster.py:929, cleanup)
2025-04-02 03:27:38 [ 760 ] DEBUG : Trying to prune unused volumes...
(cluster.py:935, cleanup) 2025-04-02 03:27:38 [ 760 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:120, run_and_check) 2025-04-02 03:27:38 [ 760 ] DEBUG : Stdout:1 (cluster.py:144, run_and_check) 2025-04-02 03:27:38 [ 760 ] DEBUG : Volumes pruned: 1 (cluster.py:940, cleanup) ============================== slowest durations =============================== 144.67s call test_reload_clusters_config/test.py::test_add_cluster 137.22s call test_reload_clusters_config/test.py::test_delete_cluster 128.80s call test_reload_clusters_config/test.py::test_update_one_cluster 118.23s call test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke 63.61s call test_reload_clusters_config/test.py::test_simple_reload 52.16s call test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db 36.30s setup test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped 33.44s call test_recompression_ttl/test.py::test_recompression_multiple_ttls 28.53s call test_recompression_ttl/test.py::test_recompression_simple 27.57s setup test_read_only_table/test.py::test_restart_zookeeper 26.70s setup test_s3_cluster/test.py::test_ambiguous_join 26.60s setup test_reload_clusters_config/test.py::test_add_cluster 26.54s call test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped 26.18s call test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3] 24.92s setup test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3] 24.54s teardown test_restore_replica/test.py::test_restore_replica_sequential 23.70s setup test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header] 22.86s call test_role/test.py::test_roles_cache 22.60s teardown test_s3_cluster/test.py::test_distributed_insert_select_with_replicated 22.14s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format 22.05s teardown test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header] 21.65s setup test_recompression_ttl/test.py::test_recompression_multiple_ttls 21.60s teardown test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped 21.27s call test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables 21.20s teardown test_read_only_table/test.py::test_restart_zookeeper 20.16s call test_read_only_table/test.py::test_restart_zookeeper 19.74s setup test_replication_credentials/test.py::test_credentials_and_no_credentials 19.68s setup test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke 19.30s call test_recompression_ttl/test.py::test_recompression_replicated 18.81s setup test_restore_replica/test.py::test_restore_replica_alive_replicas 18.57s setup test_replica_can_become_leader/test.py::test_can_become_leader 18.02s setup test_replica_is_active/test.py::test_replica_is_active 17.75s setup test_prometheus_protocols/test.py::test_64bit_id 16.43s setup test_postgresql_database_engine/test.py::test_datetime 15.97s call test_restore_replica/test.py::test_restore_replica_sequential 15.76s setup test_relative_filepath/test.py::test_filepath 15.72s setup test_replication_without_zookeeper/test.py::test_startup_without_zookeeper 15.64s setup test_prometheus_endpoint/test.py::test_prometheus_endpoint 15.42s setup test_replicating_constants/test.py::test_different_versions 15.32s setup 
test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers 14.98s setup test_reloading_settings_from_users_xml/test.py::test_force_reload 14.95s call test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop 14.91s call test_restore_replica/test.py::test_restore_replica_parallel 14.49s call test_recovery_time_metric/test.py::test_recovery_time_metric 14.47s setup test_recovery_time_metric/test.py::test_recovery_time_metric 14.06s setup test_role/test.py::test_admin_option 13.92s call test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers 13.53s setup test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop 13.48s call test_replication_without_zookeeper/test.py::test_startup_without_zookeeper 13.41s setup test_render_log_file_name_templates/test.py::test_check_file_names 13.03s teardown test_replication_credentials/test.py::test_same_credentials 12.69s setup test_reload_certificate/test.py::test_ECcert_reload 12.41s setup test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 12.14s call test_refreshable_mv/test.py::test_refreshable_mv_in_system_db 10.93s call test_role/test.py::test_role_expiration[True] 10.47s call test_role/test.py::test_role_expiration[False] 10.34s teardown test_recompression_ttl/test.py::test_recompression_simple 10.28s teardown test_reload_clusters_config/test.py::test_update_one_cluster 10.26s call test_restore_replica/test.py::test_restore_replica_alive_replicas 9.76s call test_role/test.py::test_introspection 8.37s setup test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order 8.31s call test_role/test.py::test_revoke_requires_admin_option 8.23s call test_restart_server/test.py::test_flushes_async_insert_queue 7.94s call test_replica_is_active/test.py::test_replica_is_active 7.93s call test_prometheus_protocols/test.py::test_external_tables 7.87s teardown test_prometheus_protocols/test.py::test_tags_to_columns 7.45s setup test_restart_server/test.py::test_drop_memory_database 7.24s teardown test_replicating_constants/test.py::test_different_versions 6.83s teardown test_role/test.py::test_set_role 6.76s call test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2] 6.72s teardown test_refreshable_mv/test.py::test_refreshable_mv_in_system_db 6.17s teardown test_replica_can_become_leader/test.py::test_can_become_leader 6.12s teardown test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout 5.88s teardown test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers 5.83s call test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 5.77s teardown test_postgresql_database_engine/test.py::test_predefined_connection_configuration 5.47s teardown test_relative_filepath/test.py::test_filepath 5.35s call test_s3_cluster/test.py::test_distributed_insert_select_with_replicated 5.28s call test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4] 5.27s call test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format 5.20s call test_postgresql_database_engine/test.py::test_predefined_connection_configuration 5.08s setup test_range_hashed_dictionary_types/test.py::test_range_hashed_dict 5.01s call test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout 4.88s call test_replication_credentials/test.py::test_credentials_and_no_credentials 4.87s teardown 
test_render_log_file_name_templates/test.py::test_check_file_names 4.84s call test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 4.44s teardown test_reload_certificate/test.py::test_first_than_second_cert 4.36s teardown test_replica_is_active/test.py::test_replica_is_active 4.31s call test_replication_credentials/test.py::test_no_credentials 4.28s call test_replication_credentials/test.py::test_different_credentials 4.27s call test_restart_server/test.py::test_drop_memory_database 4.15s teardown test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order 3.97s call test_prometheus_protocols/test.py::test_create_as_table 3.92s call test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl 3.89s teardown test_prometheus_endpoint/test.py::test_prometheus_endpoint 3.64s teardown test_range_hashed_dictionary_types/test.py::test_range_hashed_dict 3.48s call test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1] 3.47s call test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3] 3.36s call test_replication_credentials/test.py::test_same_credentials 3.11s call test_s3_cluster/test.py::test_cluster_with_header 2.83s call test_postgresql_database_engine/test.py::test_postgresql_database_with_schema 2.73s call test_prometheus_protocols/test.py::test_tags_to_columns 2.67s teardown test_recovery_time_metric/test.py::test_recovery_time_metric 2.46s call test_prometheus_protocols/test.py::test_default 2.41s call test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped 2.40s call test_prometheus_protocols/test.py::test_custom_id_algorithm 2.36s call test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header] 2.34s call test_s3_cluster/test.py::test_cluster_default_expression 2.32s call test_role/test.py::test_admin_option 2.23s teardown test_replication_without_zookeeper/test.py::test_startup_without_zookeeper 2.23s call test_postgresql_database_engine/test.py::test_postgresql_password_leak 2.18s call test_role/test.py::test_combine_privileges 2.08s call test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header] 2.07s call test_rocksdb_read_only/test.py::test_read_only 2.04s call test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl 2.03s teardown test_prometheus_protocols/test.py::test_inner_engines 1.99s call test_s3_cluster/test.py::test_cluster_format_detection 1.94s call test_role/test.py::test_function_current_roles 1.93s call test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header] 1.84s call test_role/test.py::test_create_role 1.83s call test_role/test.py::test_grant_role_to_role 1.75s call test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout 1.73s call test_postgresql_database_engine/test.py::test_postgres_database_old_syntax 1.72s call test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format 1.71s call test_reloading_settings_from_users_xml/test.py::test_force_reload 1.70s call test_postgresql_database_engine/test.py::test_datetime 1.70s teardown test_restart_server/test.py::test_flushes_async_insert_queue 1.57s call test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries 1.49s call 
test_prometheus_endpoint/test.py::test_prometheus_endpoint 1.44s call test_prometheus_protocols/test.py::test_inner_engines 1.38s teardown test_prometheus_protocols/test.py::test_remote_write_v1_status_code 1.38s call test_role/test.py::test_changing_default_roles_affects_new_sessions_only 1.38s call test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays 1.35s call test_prometheus_protocols/test.py::test_64bit_id 1.34s call test_s3_cluster/test.py::test_cluster_with_named_collection 1.33s teardown test_prometheus_protocols/test.py::test_external_tables 1.32s teardown test_prometheus_protocols/test.py::test_default 1.29s setup test_replication_credentials/test.py::test_same_credentials 1.28s teardown test_prometheus_protocols/test.py::test_read_auth 1.26s call test_relative_filepath/test.py::test_filepath 1.23s call test_reload_certificate/test.py::test_ECcert_reload 1.21s call test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum 1.18s teardown test_rocksdb_read_only/test.py::test_read_only 1.10s teardown test_prometheus_protocols/test.py::test_create_as_table 1.04s call test_role/test.py::test_set_role 1.03s teardown test_prometheus_protocols/test.py::test_64bit_id 1.03s teardown test_prometheus_protocols/test.py::test_custom_id_algorithm 1.01s teardown test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 0.99s call test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload 0.98s call test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int 0.88s call test_reload_certificate/test.py::test_cert_with_pass_phrase 0.88s call test_reload_certificate/test.py::test_chain_reload 0.83s teardown test_role/test.py::test_combine_privileges 0.73s call test_s3_cluster/test.py::test_count_macro 0.72s call test_reload_certificate/test.py::test_first_than_second_cert 0.72s call test_range_hashed_dictionary_types/test.py::test_range_hashed_dict 0.66s call test_postgresql_database_engine/test.py::test_postgresql_fetch_tables 0.65s call test_restore_replica/test.py::test_restore_replica_invalid_tables 0.62s setup test_replication_credentials/test.py::test_different_credentials 0.59s setup test_replication_credentials/test.py::test_no_credentials 0.58s call test_s3_cluster/test.py::test_ambiguous_join 0.58s call test_s3_cluster/test.py::test_count 0.54s teardown test_role/test.py::test_role_expiration[False] 0.49s call test_replicating_constants/test.py::test_different_versions 0.48s call test_replica_can_become_leader/test.py::test_can_become_leader 0.48s call test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order 0.44s teardown test_role/test.py::test_roles_cache 0.44s teardown test_role/test.py::test_create_role 0.43s teardown test_role/test.py::test_admin_option 0.43s teardown test_role/test.py::test_role_expiration[True] 0.38s teardown test_role/test.py::test_revoke_requires_admin_option 0.38s teardown test_role/test.py::test_grant_role_to_role 0.33s teardown test_role/test.py::test_function_current_roles 0.33s teardown test_role/test.py::test_introspection 0.33s teardown test_role/test.py::test_changing_default_roles_affects_new_sessions_only 0.33s setup test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload 0.31s call test_prometheus_protocols/test.py::test_read_auth 0.31s setup test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout 0.31s setup 
test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum 0.27s setup test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout 0.27s call test_render_log_file_name_templates/test.py::test_check_file_names 0.27s setup test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int 0.24s call test_prometheus_protocols/test.py::test_remote_write_v1_status_code 0.02s teardown test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables 0.01s teardown test_recompression_ttl/test.py::test_recompression_multiple_ttls 0.01s teardown test_reload_clusters_config/test.py::test_add_cluster 0.01s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3] 0.00s teardown test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke 0.00s setup test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped 0.00s teardown test_restore_replica/test.py::test_restore_replica_parallel 0.00s teardown test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped 0.00s setup test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain] 0.00s setup test_recompression_ttl/test.py::test_recompression_replicated 0.00s setup test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl 0.00s setup test_reload_clusters_config/test.py::test_delete_cluster 0.00s setup test_role/test.py::test_revoke_requires_admin_option 0.00s setup test_restore_replica/test.py::test_restore_replica_sequential 0.00s setup test_role/test.py::test_role_expiration[False] 0.00s teardown test_replication_credentials/test.py::test_no_credentials 0.00s teardown test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header] 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_password_leak 0.00s setup test_prometheus_protocols/test.py::test_read_auth 0.00s setup test_role/test.py::test_role_expiration[True] 0.00s teardown test_restore_replica/test.py::test_restore_replica_alive_replicas 0.00s setup test_prometheus_protocols/test.py::test_default 0.00s setup test_prometheus_protocols/test.py::test_external_tables 0.00s setup test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header] 0.00s setup test_reload_clusters_config/test.py::test_update_one_cluster 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0] 0.00s teardown test_reload_clusters_config/test.py::test_delete_cluster 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries 0.00s call test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain] 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3] 0.00s setup test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header] 0.00s setup test_postgresql_database_engine/test.py::test_postgres_database_old_syntax 0.00s setup test_role/test.py::test_function_current_roles 0.00s teardown test_s3_cluster/test.py::test_ambiguous_join 0.00s setup test_role/test.py::test_combine_privileges 0.00s teardown test_replication_credentials/test.py::test_credentials_and_no_credentials 0.00s teardown 
test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db 0.00s setup test_restore_replica/test.py::test_restore_replica_parallel 0.00s setup test_role/test.py::test_introspection 0.00s setup test_prometheus_protocols/test.py::test_remote_write_v1_status_code 0.00s setup test_prometheus_protocols/test.py::test_tags_to_columns 0.00s setup test_role/test.py::test_changing_default_roles_affects_new_sessions_only 0.00s setup test_role/test.py::test_create_role 0.00s setup test_role/test.py::test_roles_cache 0.00s teardown test_restart_server/test.py::test_drop_memory_database 0.00s teardown test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop 0.00s setup test_rocksdb_read_only/test.py::test_read_only 0.00s setup test_prometheus_protocols/test.py::test_inner_engines 0.00s teardown test_reloading_settings_from_users_xml/test.py::test_force_reload 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain] 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2] 0.00s setup test_reload_certificate/test.py::test_chain_reload 0.00s setup test_prometheus_protocols/test.py::test_create_as_table 0.00s setup test_refreshable_mv/test.py::test_refreshable_mv_in_system_db 0.00s setup test_prometheus_protocols/test.py::test_custom_id_algorithm 0.00s setup test_role/test.py::test_grant_role_to_role 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4] 0.00s setup test_reload_certificate/test.py::test_first_than_second_cert 0.00s setup test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays 0.00s teardown test_reload_clusters_config/test.py::test_simple_reload 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl 0.00s setup test_postgresql_database_engine/test.py::test_predefined_connection_configuration 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4] 0.00s setup test_role/test.py::test_set_role 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format 0.00s setup test_reload_certificate/test.py::test_cert_with_pass_phrase 0.00s setup test_s3_cluster/test.py::test_count_macro 0.00s setup test_recompression_ttl/test.py::test_recompression_simple 0.00s teardown test_replication_credentials/test.py::test_different_credentials 0.00s teardown test_reload_certificate/test.py::test_ECcert_reload 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 0.00s teardown test_postgresql_database_engine/test.py::test_datetime 0.00s teardown test_postgresql_database_engine/test.py::test_postgres_database_old_syntax 0.00s setup test_s3_cluster/test.py::test_cluster_with_header 0.00s teardown test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout 0.00s setup test_s3_cluster/test.py::test_cluster_format_detection 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1] 0.00s setup test_s3_cluster/test.py::test_cluster_default_expression 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format 0.00s teardown test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum 0.00s setup test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables 0.00s setup 
test_restart_server/test.py::test_flushes_async_insert_queue 0.00s setup test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format 0.00s teardown test_recompression_ttl/test.py::test_recompression_replicated 0.00s teardown test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl 0.00s setup test_restore_replica/test.py::test_restore_replica_invalid_tables 0.00s teardown test_restore_replica/test.py::test_restore_replica_invalid_tables 0.00s setup test_s3_cluster/test.py::test_count 0.00s teardown test_reload_certificate/test.py::test_chain_reload 0.00s setup test_reload_clusters_config/test.py::test_simple_reload 0.00s teardown test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header] 0.00s teardown test_s3_cluster/test.py::test_count 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1] 0.00s teardown test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3] 0.00s teardown test_s3_cluster/test.py::test_cluster_default_expression 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_database_with_schema 0.00s setup test_s3_cluster/test.py::test_distributed_insert_select_with_replicated 0.00s setup test_postgresql_database_engine/test.py::test_postgresql_fetch_tables 0.00s setup test_s3_cluster/test.py::test_cluster_with_named_collection 0.00s teardown test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2] 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_password_leak 0.00s call test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0] 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_fetch_tables 0.00s teardown test_s3_cluster/test.py::test_cluster_with_named_collection 0.00s teardown test_reload_certificate/test.py::test_cert_with_pass_phrase 0.00s teardown test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache 0.00s teardown test_s3_cluster/test.py::test_cluster_format_detection 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl 0.00s teardown test_s3_cluster/test.py::test_count_macro 0.00s teardown test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0] 0.00s teardown test_postgresql_database_engine/test.py::test_postgresql_database_with_schema 0.00s teardown test_s3_cluster/test.py::test_cluster_with_header =========================== short test summary info ============================ FAILED test_reload_clusters_config/test.py::test_update_one_cluster - Asserti... 
PASSED test_reload_certificate/test.py::test_ECcert_reload PASSED test_reload_certificate/test.py::test_cert_with_pass_phrase PASSED test_reload_certificate/test.py::test_chain_reload PASSED test_role/test.py::test_admin_option PASSED test_reload_certificate/test.py::test_first_than_second_cert PASSED test_reloading_settings_from_users_xml/test.py::test_force_reload PASSED test_postgresql_database_engine/test.py::test_datetime PASSED test_role/test.py::test_changing_default_roles_affects_new_sessions_only PASSED test_prometheus_protocols/test.py::test_64bit_id PASSED test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays PASSED test_role/test.py::test_combine_privileges PASSED test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout PASSED test_role/test.py::test_create_role PASSED test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum PASSED test_prometheus_protocols/test.py::test_create_as_table PASSED test_replication_credentials/test.py::test_credentials_and_no_credentials PASSED test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int PASSED test_role/test.py::test_function_current_roles PASSED test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload PASSED test_s3_cluster/test.py::test_ambiguous_join PASSED test_prometheus_protocols/test.py::test_custom_id_algorithm PASSED test_role/test.py::test_grant_role_to_role PASSED test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout PASSED test_restore_replica/test.py::test_restore_replica_alive_replicas PASSED test_replication_credentials/test.py::test_different_credentials PASSED test_s3_cluster/test.py::test_cluster_default_expression PASSED test_restore_replica/test.py::test_restore_replica_invalid_tables PASSED test_prometheus_protocols/test.py::test_default PASSED test_s3_cluster/test.py::test_cluster_format_detection PASSED test_replication_credentials/test.py::test_no_credentials PASSED test_s3_cluster/test.py::test_cluster_with_header PASSED test_s3_cluster/test.py::test_cluster_with_named_collection PASSED test_s3_cluster/test.py::test_count PASSED test_s3_cluster/test.py::test_count_macro PASSED test_role/test.py::test_introspection PASSED test_replication_credentials/test.py::test_same_credentials PASSED test_prometheus_protocols/test.py::test_external_tables PASSED test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables PASSED test_s3_cluster/test.py::test_distributed_insert_select_with_replicated PASSED test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl PASSED test_prometheus_protocols/test.py::test_inner_engines PASSED test_postgresql_database_engine/test.py::test_postgres_database_old_syntax PASSED test_restore_replica/test.py::test_restore_replica_parallel PASSED test_prometheus_protocols/test.py::test_read_auth PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries PASSED test_role/test.py::test_revoke_requires_admin_option PASSED test_prometheus_protocols/test.py::test_remote_write_v1_status_code PASSED test_prometheus_protocols/test.py::test_tags_to_columns PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache PASSED 
test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0] PASSED test_role/test.py::test_role_expiration[False] PASSED test_postgresql_database_engine/test.py::test_postgresql_database_with_schema PASSED test_postgresql_database_engine/test.py::test_postgresql_fetch_tables PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1] PASSED test_restore_replica/test.py::test_restore_replica_sequential PASSED test_postgresql_database_engine/test.py::test_postgresql_password_leak PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2] PASSED test_postgresql_database_engine/test.py::test_predefined_connection_configuration PASSED test_role/test.py::test_role_expiration[True] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3] PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4] PASSED test_recompression_ttl/test.py::test_recompression_multiple_ttls PASSED test_restart_server/test.py::test_drop_memory_database PASSED test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format PASSED test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header] PASSED test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header] PASSED test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header] PASSED test_restart_server/test.py::test_flushes_async_insert_queue PASSED test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop PASSED test_rocksdb_read_only/test.py::test_read_only PASSED test_prometheus_endpoint/test.py::test_prometheus_endpoint PASSED test_role/test.py::test_roles_cache PASSED test_role/test.py::test_set_role PASSED test_recompression_ttl/test.py::test_recompression_replicated PASSED test_range_hashed_dictionary_types/test.py::test_range_hashed_dict PASSED test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order PASSED test_relative_filepath/test.py::test_filepath PASSED test_replicating_constants/test.py::test_different_versions PASSED test_recompression_ttl/test.py::test_recompression_simple PASSED test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped PASSED test_recovery_time_metric/test.py::test_recovery_time_metric PASSED test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers PASSED test_read_only_table/test.py::test_restart_zookeeper PASSED test_render_log_file_name_templates/test.py::test_check_file_names PASSED test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped PASSED test_replica_can_become_leader/test.py::test_can_become_leader PASSED test_replication_without_zookeeper/test.py::test_startup_without_zookeeper PASSED test_reload_clusters_config/test.py::test_add_cluster PASSED test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke PASSED test_replica_is_active/test.py::test_replica_is_active PASSED test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable PASSED test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db PASSED 
test_refreshable_mv/test.py::test_refreshable_mv_in_system_db
PASSED test_reload_clusters_config/test.py::test_delete_cluster
PASSED test_reload_clusters_config/test.py::test_simple_reload
=================== 1 failed, 99 passed in 513.74s (0:08:33) ===================
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 528, in <module>
    subprocess.check_call(cmd, shell=True, bufsize=0)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_4wrvlh --privileged --dns-search='.' --memory=30709018624 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=8b2301119731 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=caad4729259e -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_postgresql_database_engine/test.py::test_datetime test_postgresql_database_engine/test.py::test_get_create_table_query_with_multidim_arrays test_postgresql_database_engine/test.py::test_inaccessible_postgresql_database_engine_filterable_on_system_tables test_postgresql_database_engine/test.py::test_postgres_database_engine_with_postgres_ddl test_postgresql_database_engine/test.py::test_postgres_database_old_syntax test_postgresql_database_engine/test.py::test_postgresql_database_engine_queries test_postgresql_database_engine/test.py::test_postgresql_database_engine_table_cache test_postgresql_database_engine/test.py::test_postgresql_database_engine_with_clickhouse_ddl test_postgresql_database_engine/test.py::test_postgresql_database_with_schema test_postgresql_database_engine/test.py::test_postgresql_fetch_tables test_postgresql_database_engine/test.py::test_postgresql_password_leak test_postgresql_database_engine/test.py::test_predefined_connection_configuration test_profile_settings_and_constraints_order/test.py::test_profile_settings_and_constraints_order test_prometheus_endpoint/test.py::test_prometheus_endpoint test_prometheus_protocols/test.py::test_64bit_id test_prometheus_protocols/test.py::test_create_as_table test_prometheus_protocols/test.py::test_custom_id_algorithm test_prometheus_protocols/test.py::test_default test_prometheus_protocols/test.py::test_external_tables
test_prometheus_protocols/test.py::test_inner_engines test_prometheus_protocols/test.py::test_read_auth test_prometheus_protocols/test.py::test_remote_write_v1_status_code test_prometheus_protocols/test.py::test_tags_to_columns test_range_hashed_dictionary_types/test.py::test_range_hashed_dict test_read_only_table/test.py::test_restart_zookeeper test_recompression_ttl/test.py::test_recompression_multiple_ttls test_recompression_ttl/test.py::test_recompression_replicated test_recompression_ttl/test.py::test_recompression_simple test_recovery_time_metric/test.py::test_recovery_time_metric test_refreshable_mv/test.py::test_refresh_vs_shutdown_smoke test_refreshable_mv/test.py::test_refreshable_mv_in_replicated_db test_refreshable_mv/test.py::test_refreshable_mv_in_system_db test_relative_filepath/test.py::test_filepath test_reload_auxiliary_zookeepers/test.py::test_reload_auxiliary_zookeepers test_reload_certificate/test.py::test_ECcert_reload test_reload_certificate/test.py::test_cert_with_pass_phrase test_reload_certificate/test.py::test_chain_reload test_reload_certificate/test.py::test_first_than_second_cert test_reload_clusters_config/test.py::test_add_cluster test_reload_clusters_config/test.py::test_delete_cluster test_reload_clusters_config/test.py::test_simple_reload test_reload_clusters_config/test.py::test_update_one_cluster test_reloading_settings_from_users_xml/test.py::test_force_reload test_reloading_settings_from_users_xml/test.py::test_reload_on_timeout test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_enum test_reloading_settings_from_users_xml/test.py::test_unexpected_setting_int test_reloading_settings_from_users_xml/test.py::test_unknown_setting_force_reload test_reloading_settings_from_users_xml/test.py::test_unknown_setting_reload_on_timeout 'test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_log_table[s3_plain]' test_remote_blobs_naming/test_backward_compatibility.py::test_read_new_format 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case0]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case1]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case2]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case3]' 'test_remote_blobs_naming/test_backward_compatibility.py::test_replicated_merge_tree[test_case4]' test_remote_blobs_naming/test_backward_compatibility.py::test_write_new_format test_render_log_file_name_templates/test.py::test_check_file_names test_replica_can_become_leader/test.py::test_can_become_leader test_replica_is_active/test.py::test_replica_is_active test_replicated_zero_copy_projection_mutation/test.py::test_all_projection_files_are_dropped_when_part_is_dropped test_replicated_zero_copy_projection_mutation/test.py::test_hardlinks_preserved_when_projection_dropped test_replicating_constants/test.py::test_different_versions test_replication_credentials/test.py::test_credentials_and_no_credentials test_replication_credentials/test.py::test_different_credentials test_replication_credentials/test.py::test_no_credentials test_replication_credentials/test.py::test_same_credentials test_replication_without_zookeeper/test.py::test_startup_without_zookeeper test_restart_server/test.py::test_drop_memory_database test_restart_server/test.py::test_flushes_async_insert_queue 
test_restore_replica/test.py::test_restore_replica_alive_replicas test_restore_replica/test.py::test_restore_replica_invalid_tables test_restore_replica/test.py::test_restore_replica_parallel test_restore_replica/test.py::test_restore_replica_sequential test_rocksdb_read_only/test.py::test_dirctory_missing_after_stop test_rocksdb_read_only/test.py::test_read_only test_role/test.py::test_admin_option test_role/test.py::test_changing_default_roles_affects_new_sessions_only test_role/test.py::test_combine_privileges test_role/test.py::test_create_role test_role/test.py::test_function_current_roles test_role/test.py::test_grant_role_to_role test_role/test.py::test_introspection test_role/test.py::test_revoke_requires_admin_option 'test_role/test.py::test_role_expiration[False]' 'test_role/test.py::test_role_expiration[True]' test_role/test.py::test_roles_cache test_role/test.py::test_set_role test_runtime_configurable_cache_size/test.py::test_query_cache_size_is_runtime_configurable 'test_s3_access_headers/test.py::test_custom_access_header[test_access_key_id_overrides_access_header]' 'test_s3_access_headers/test.py::test_custom_access_header[test_access_over_custom_header]' 'test_s3_access_headers/test.py::test_custom_access_header[test_named_coll_overrides_access_header]' test_s3_cluster/test.py::test_ambiguous_join test_s3_cluster/test.py::test_cluster_default_expression test_s3_cluster/test.py::test_cluster_format_detection test_s3_cluster/test.py::test_cluster_with_header test_s3_cluster/test.py::test_cluster_with_named_collection test_s3_cluster/test.py::test_count test_s3_cluster/test.py::test_count_macro test_s3_cluster/test.py::test_distributed_insert_select_with_replicated -vvv -ss" altinityinfra/integration-tests-runner:2165613c5fcd ' returned non-zero exit status 1.
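The closing traceback shows how ./runner surfaces this outcome: the entire pytest session runs inside a single docker run, and subprocess.check_call converts the container's non-zero exit status into CalledProcessError, so one failed test out of 100 fails the whole CI step. The pattern, reduced to its core (the docker command itself is elided here):

    import subprocess

    cmd = "docker run --rm ... altinityinfra/integration-tests-runner:2165613c5fcd ..."  # elided

    try:
        subprocess.check_call(cmd, shell=True, bufsize=0)  # as at runner line 528
    except subprocess.CalledProcessError as e:
        # docker run exits with the container's status; pytest exits non-zero
        # when any test fails, so the failure propagates to the CI job.
        raise SystemExit(e.returncode)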